
digitalmars.D - Final by default?

reply Walter Bright <newshound2 digitalmars.com> writes:
The argument for final by default, as eloquently expressed by Manu, is a good 
one. Even Andrei agrees with it (!).

The trouble, however, was illuminated most recently by the std.json regression 
that broke existing code. The breakage wasn't even intentional; it was a 
mistake. The user fix was also simple, just a tweak here and there to user
code, and the compiler pointed out where each change needed to be made.

But we nearly lost a major client over it.

We're past the point where we can break everyone's code. It's going to cost us 
far, far more than we'll gain. (And you all know that if we could do massive 
do-overs, I'd get rid of put's auto-decode.)

Instead, one can write:

    class C { final: ... }

as a pattern, and everything in the class will be final. That leaves the "but 
what if I want a single virtual function?" There needs to be a way to locally 
turn off 'final'. Adding 'virtual' is one way to do that, but:

1. there are other attributes we might wish to turn off, like 'pure' and
'nothrow'.

2. it seems excessive to dedicate a keyword just for that.

So, there's the solution that has been proposed before:

    !final
    !pure
    !nothrow
    etc.
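For illustration, a minimal sketch of the pattern under discussion; note that the `!final` line is the *proposed* syntax from this thread, not valid D today, so it is shown commented out:

```d
class C
{
final:                  // label syntax: every member from here on is final
    void fast() {}      // non-virtual; can be devirtualized and inlined
    void alsoFast() {}  // non-virtual

    // Proposed escape hatch (hypothetical syntax from this thread):
    // !final void render() {}   // would make just this one method virtual
}
```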
Mar 12 2014
next sibling parent reply "Namespace" <rswhite4 googlemail.com> writes:
As long as it works as the opposite and can be used as a label like they 
can, I like it. It's a smart idea.
Mar 12 2014
parent reply "Namespace" <rswhite4 googlemail.com> writes:
Would it mean that we would deprecate "nothrow" and replace it 
with "!throw"?
Mar 12 2014
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Wednesday, 12 March 2014 at 23:00:26 UTC, Namespace wrote:
 Would it mean that we would deprecate "nothrow" and replace it 
 with "!throw"?
I like that !
Mar 12 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/12/2014 9:57 PM, deadalnix wrote:
 On Wednesday, 12 March 2014 at 23:00:26 UTC, Namespace wrote:
 Would it mean that we would deprecate "nothrow" and replace it with "!throw"?
I like that !
Not no-how, not no-way!
Mar 12 2014
prev sibling next sibling parent reply luka8088 <luka8088 owave.net> writes:
On 12.3.2014. 23:50, Walter Bright wrote:
 But we nearly lost a major client over it.
How do you nearly lose a client over a change in a development branch which 
was never a part of any release? (or am I mistaken?) You seem to have very 
demanding clients :)

On a side thought, maybe there should also be a stable branch?
Mar 12 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/12/2014 4:01 PM, luka8088 wrote:
 How do you nearly lose a client over a change in a development branch
 which was never a part of any release? (or am I mistaken?)
The change went into a release.
 You seem to have a very demanding clients :)
Many clients have invested a great deal into D. They're entitled to be demanding.
Mar 12 2014
next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 3/13/14, Walter Bright <newshound2 digitalmars.com> wrote:
 On 3/12/2014 4:01 PM, luka8088 wrote:
 How do you nearly lose a client over a change in a development branch
 which was never a part of any release? (or am I mistaken?)
The change went into a release.
Couldn't the client just file a bug? We've already agreed we'd make point releases for regression fixes. Client files bug, we fix it, upload the new compiler, everybody is happy. In the meantime the client could have continued to use the previous release.
Mar 13 2014
parent "Rikki Cattermole" <alphaglosined gmail.com> writes:
On Thursday, 13 March 2014 at 08:02:22 UTC, Andrej Mitrovic wrote:
 On 3/13/14, Walter Bright <newshound2 digitalmars.com> wrote:
 On 3/12/2014 4:01 PM, luka8088 wrote:
 How do you nearly lose a client over a change in a 
 development branch
 which was never a part of any release? (or am I mistaken?)
The change went into a release.
Couldn't the client just file a bug? We've already agreed we'd make point releases for regression fixes. Client files bug, we fix it, upload the new compiler, everybody is happy. In the meantime the client could have continued to use the previous release.
Are there currently snapshot installers for D? These may be a huge help in these cases, i.e. nightly releases based off of commits.
Mar 13 2014
prev sibling parent luka8088 <luka8088 owave.net> writes:
On 13.3.2014. 0:48, Walter Bright wrote:
 On 3/12/2014 4:01 PM, luka8088 wrote:
 How do you nearly lose a client over a change in a development branch
 which was never a part of any release? (or am I mistaken?)
The change went into a release.
I see, that indeed is an issue.
Mar 13 2014
prev sibling next sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 12/03/14 23:50, Walter Bright wrote:
 The trouble, however, was illuminated most recently by the std.json regression
 that broke existing code. The breakage wasn't even intentional; it was a
 mistake. The user fix was also simple, just a tweak here and there to user
code,
 and the compiler pointed out where each change needed to be made.

 But we nearly lost a major client over it.
I think for clarity it might be good to understand: was this near-loss because the client felt the language might have breaking changes in future, or because this breaking change had happened suddenly and with no warning? A well-signposted and well-policed deprecation path is after all very different from what happened with std.json.
 So, there's the solution that has been proposed before:

    !final
    !pure
    !nothrow
    etc.
These sound nice to have for themselves, whatever is decided about final-by-default.
Mar 12 2014
prev sibling next sibling parent reply Ali Çehreli <acehreli yahoo.com> writes:
On 03/12/2014 03:50 PM, Walter Bright wrote:

 So, there's the solution that has been proposed before:

     !final
     !pure
     !nothrow
     etc.
The same issue came up with 'version' recently. It is possible to start a 
version block like this:

     version(something):

However, it is not possible to get to the "else" part of that version block 
using the above syntax. Would the same syntax work for version as well?

     version(something):
         // ...

     !version(something):
         // ...

Note: Yes, it is possible to use curly braces for the same effect:

     version(something)
     {
         // ...
     }
     else
     {
         // ...
     }

The question is whether it makes sense to be consistent for version (and 
debug, etc.) as well.

Ali
Mar 12 2014
next sibling parent reply Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 13/03/14 00:14, Ali Çehreli wrote:
 However, it is not possible to get to the "else" part of that version block
 using the above syntax. Would the same syntax work for version as well?

      version(something):
          // ...

      !version(something):
          // ...
In that context, doesn't it make more sense:

     version(!something):
         // ...
Mar 12 2014
parent Ali Çehreli <acehreli yahoo.com> writes:
On 03/12/2014 04:34 PM, Joseph Rushton Wakeling wrote:
 On 13/03/14 00:14, Ali Çehreli wrote:
 However, it is not possible to get to the "else" part of that version
 block
 using the above syntax. Would the same syntax work for version as well?

      version(something):
          // ...

      !version(something):
          // ...
 In that context, doesn't it make more sense:

      version(!something):
          // ...
Agreed. Ali
Mar 12 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/12/2014 4:14 PM, Ali Çehreli wrote:
 The same issue came up with 'version' recently. It is possible to start a
 version block like this:

      version(something):

 However, it is not possible to get to the "else" part of that version block
 using the above syntax. Would the same syntax work for version as well?

      version(something):
          // ...

      !version(something):
          // ...
Yes, this has come up before, in various forms. The short answer is unequivocally "no". I've expounded on this extensively in this n.g., but don't have a reference handy.
Mar 12 2014
next sibling parent reply "Temtaime" <temtaime gmail.com> writes:
Why are you speaking about breaking code?
D has never been stable. It is full of bugs. Even a monkey can 
find a bug in DMD.
I think it's ok to make breaking changes if they are necessary.
Mar 13 2014
parent "Temtaime" <temtaime gmail.com> writes:
Also, each release brings tons of regressions.
So let's not talk about being stable.
Mar 13 2014
prev sibling next sibling parent reply Nick Treleaven <ntrel-public yahoo.co.uk> writes:
On 12/03/2014 23:50, Walter Bright wrote:
 However, it is not possible to get to the "else" part of that version
 block
 using the above syntax. Would the same syntax work for version as well?

      version(something):
          // ...

      !version(something):
          // ...
 Yes, this has come up before, in various forms. The short answer is
 unequivocally "no". I've expounded on this extensively in this n.g., but
 don't have a reference handy.
Some discussion here:
https://d.puremagic.com/issues/show_bug.cgi?id=7417

I understand the argument about not allowing version(X && Y), but I do think 
this construct is ugly:

     version(Foo) {} else { //code

Defining a version identifier NotFoo as well as Foo would be a bad solution.

What's wrong with:

     version(!Foo) {}

?
Mar 13 2014
parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Nick Treleaven"  wrote in message news:lfsb5c$1kvi$1 digitalmars.com...

 Some discussion here:
 https://d.puremagic.com/issues/show_bug.cgi?id=7417

 I understand the argument about not allowing version(X && Y), but I do 
 think this construct is ugly:

 version(Foo) {} else { //code

 Defining a version identifier NotFoo as well as Foo would be a bad 
 solution.

 What's wrong with:

 version(!Foo) {}
This is one of the few areas where D is idealistic instead of pragmatic. For 
some reason Walter has decided that, unlike most things, this needs to be 
forced upon everyone.

This reminds me of banning goto and multiple returns because 'structured 
programming' prevents goto spaghetti madness. As always, it simply forces the 
programmer to jump through more hoops to get what they want.
Mar 13 2014
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/13/2014 6:23 AM, Daniel Murphy wrote:
 For some reason Walter has decided
I've expounded on this at length, it is not "some reason".
Mar 13 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/13/2014 6:23 AM, Daniel Murphy wrote:
 This is one of the few areas where D is idealistic instead of pragmatic.
Nope. It's a very pragmatic decision.
 This reminds me of banning goto and multiple returns because 'structured
 programming' prevents goto spaghetti madness.  As always, it simply forces the
 programmer to jump through more hoops to get what they want.
I've seen some of those hoops programmers have done to get that behavior in D, 
and it resulted in just what I predicted - confusing bugs due to wacky 
dependencies between modules.

Bluntly, if your code requires more than version(Feature) you are doing it 
wrong.

I haven't yet seen an example of boolean version expressions that made the 
code clearer, simpler, or more maintainable than version(Feature). I've seen 
endless examples of boolean version expressions that are a rat's nest of 
unmaintainable, buggy, confusing garbage.

I've made a major effort to remove all that garbage from dmd's source code, 
for example, and am very pleased with the results. There's still some in 
druntime that I wish to get refactored out.

And yes, I'm pretty opinionated about this :-)
Mar 13 2014
parent "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Walter Bright"  wrote in message news:lfsv9r$263r$1 digitalmars.com...

 Nope. It's a very pragmatic decision.
Errr...
 I've seen some of those hoops programmers have done to get that behavior 
 in D, and it resulted in just what I predicted - confusing bugs due to 
 wacky dependencies between modules.

 Bluntly, if your code requires more than version(Feature) you are doing it 
 wrong.

 I haven't yet seen an example of boolean version expressions that made the 
 code clearer, simpler, or more maintainable than version(Feature).
I agree! Writing code this way pretty much always leads to better organisation 
and encourages separating things by feature instead of irrelevant things like 
host platform.

HOWEVER - forcing this on everybody all the time is not a good thing. Not all 
code is for a long-term or large project.

A similar example is the common rule that functions should be short and do one 
thing. This is a great rule to keep things sane - but if the compiler errored 
when my functions got too long it would just be a huge pain in my ass.

Why don't we ban goto? It can certainly be used to write confusing and 
unmaintainable code! So can switch! Operator overloading can be awful too!

Most of the time, D gives me all the power and lets me decide how I use it. If 
I wanted those choices to be taken away from me, I'd be using Go.
 I've seen endless examples of boolean version expressions that are a rat's 
 nest of unmaintainable, buggy, confusing garbage. I've made a major effort 
 to remove all that garbage from dmd's source code, for example, and am 
 very pleased with the results. There's still some in druntime that I wish 
 to get refactored out.
Of course, DMD is still full of this. DDMD has to use static if instead of version because of this.
 And yes, I'm pretty opinionated about this :-)
:-) That's fine, and I think you're right about it. But, like "don't use 
goto", a compile error is the wrong place to enforce this.
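The `static if` alternative Daniel alludes to can be sketched like this (the flag names `modeA`/`modeB` are made up for illustration; unlike `version`, `static if` accepts arbitrary compile-time boolean expressions):

```d
// Compile-time configuration flags; in real code these might be
// computed from version identifiers or build options.
enum bool modeA = true;
enum bool modeB = false;

// version(modeA || modeB) is rejected by the language, but static if
// happily evaluates any compile-time boolean expression.
static if (modeA || modeB)
    enum bool featureA = true;
else
    enum bool featureA = false;

static assert(featureA); // modeA is set, so the feature is enabled
```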
Mar 14 2014
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 12 Mar 2014 19:50:14 -0400, Walter Bright 
<newshound2 digitalmars.com> wrote:

 On 3/12/2014 4:14 PM, Ali Çehreli wrote:
 The same issue came up with 'version' recently. It is possible to start a
 version block like this:

      version(something):

 However, it is not possible to get to the "else" part of that version block
 using the above syntax. Would the same syntax work for version as well?

      version(something):
          // ...

      !version(something):
          // ...
 Yes, this has come up before, in various forms. The short answer is
 unequivocally "no". I've expounded on this extensively in this n.g., but
 don't have a reference handy.

Negating version is not the main problem, as one can do version(x){} else{}, 
which is ugly but effective. Logical AND is also quite easy with 
version(x) version(y) {}

The one I would really like to see is logical OR. There is no easy way 
around this, one must come up with convoluted mechanisms that are much 
harder to design, write, and understand than just version(x || y)

-Steve
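Steven's two observations can be sketched in a few lines (the identifiers `x` and `y` are hypothetical, set on the command line with `-version=x` etc.):

```d
// AND works today by nesting: the body exists only when both are set.
version (x) version (y)
{
    pragma(msg, "both x and y are set");
}

// OR has no direct syntax; it needs an intermediate identifier.
version (x) version = xOrY;
version (y) version = xOrY;

version (xOrY)
{
    pragma(msg, "x or y (or both) is set");
}
```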
Mar 13 2014
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 03/13/2014 02:32 PM, Steven Schveighoffer wrote:
 The one I would really like to see is logical OR. There is no easy way
 around this, one must come up with convoluted mechanisms that are much
 harder to design, write, and understand than just version(x || y)
version(A) version=AorB;
version(B) version=AorB;

version(AorB){ }
Mar 13 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/13/2014 3:21 PM, Timon Gehr wrote:
 On 03/13/2014 02:32 PM, Steven Schveighoffer wrote:
 The one I would really like to see is logical OR. There is no easy way
 around this, one must come up with convoluted mechanisms that are much
 harder to design, write, and understand than just version(x || y)
 version(A) version=AorB;
 version(B) version=AorB;
 version(AorB){ }
If you're writing things like that, it's still missing the point. The point is 
not to find workarounds, but to rethink just what feature is being version'd 
on.

For example, suppose it's wrapping a call to SomeWackyFunction:

     version (Linux)
         SomeWackyFunction();
     else version (OSX)
         SomeWackyFunction();
     else
         ... workaround ...

Execrable answer:

     version (Linux) version=LinuxOrOSX;
     version (OSX)   version=LinuxOrOSX;
     ...
     version (LinuxOrOSX)
         SomeWackyFunction();
     else
         ... workaround ...

This is execrable because LinuxOrOSX:

1. has no particular relationship to SomeWackyFunction()

2. makes for confusion if LinuxOrOSX is used also to version in other things

3. makes for what do I do if I add in FreeBSD? Rename to LinuxOrOSXOrFreeBSD? 
yeech

Better answer:

     version (Linux) version=hasSomeWackyFunction;
     version (OSX)   version=hasSomeWackyFunction;
     ...
     version (hasSomeWackyFunction)
         SomeWackyFunction();
     else
         ... workaround ...

At least this is maintainable, though it's clumsy if that code sequence 
appears more than once or, worse, must be replicated in multiple modules.

Even better:

--------
import wackyfunctionality;
...
WackyFunction();
--------
module wackyfunctionality;

void WackyFunction() {
     version (Linux)
         SomeWackyFunction();
     else version (OSX)
         SomeWackyFunction();
     else
         ... workaround ...
}
--------

Simple, maintainable, easy on the eye.
Mar 13 2014
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 03/13/2014 11:43 PM, Walter Bright wrote:
 version(A) version=AorB;
 version(B) version=AorB;
 version(AorB){ }
If you're writing things like that, it's still missing the point. ...
Unless the point is to demonstrate that it is not too convoluted. ;)
Mar 13 2014
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 13 Mar 2014 18:55:26 -0400, Timon Gehr <timon.gehr gmx.ch> wrote:

 On 03/13/2014 11:43 PM, Walter Bright wrote:
 version(A) version=AorB;
 version(B) version=AorB;
 version(AorB){ }
If you're writing things like that, it's still missing the point. ...
Unless the point is to demonstrate that it is not too convoluted. ;)
And for that point, your demonstration has failed :) The above is very convoluted, not very DRY. -Steve
Mar 14 2014
parent reply "Ethan" <gooberman gmail.com> writes:
On Friday, 14 March 2014 at 14:02:06 UTC, Steven Schveighoffer
wrote:
 And for that point, your demonstration has failed :)
It's an extraordinarily simple use case, but it is still quite a
common pattern in C++ defines, ie:

#ifdef _DEBUG
#define FEATUREA
#define FEATUREB
#define FEATUREC
#else
#define FEATUREB
#define FEATUREC
#endif

#ifdef FEATUREB
...
#endif

In D, that would look a lot like:

debug
{
     version = FeatureA;
     version = FeatureB;
     version = FeatureC;
}
else
{
     version = FeatureB;
     version = FeatureC;
}

version(FeatureB)
{
     ...
}

So for an actual use case and not an abstract example, yes, the
way suggested is not convoluted at all.
Mar 14 2014
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 14 Mar 2014 10:11:25 -0400, Ethan <gooberman gmail.com> wrote:

 On Friday, 14 March 2014 at 14:02:06 UTC, Steven Schveighoffer
 wrote:
 And for that point, your demonstration has failed :)
 It's an extraordinarily simple use case, but it is still quite a common
 pattern in C++ defines, ie:

 #ifdef _DEBUG
 #define FEATUREA
 #define FEATUREB
 #define FEATUREC
 #else
 #define FEATUREB
 #define FEATUREC
 #endif

 #ifdef FEATUREB
 ...
 #endif
No, not really, there was no else in the version. It's when you have two 
different versions that may trigger the same definition.

#if defined(MODEA) || defined(MODEB)
#define FEATUREA
#define FEATUREB
#endif

In D, this is:

version(MODEA) version = USEFEATUREAANDB;
version(MODEB) version = USEFEATUREAANDB;

...

// imagine reading this first, and wondering what causes it to compile.
version(USEFEATUREAANDB)
{
    version = FEATUREA;
    version = FEATUREB;
}

It's the equivalent of this in C:

#ifdef MODEA
#define USEFEATUREAANDB
#endif
#ifdef MODEB
#define USEFEATUREAANDB
#endif

...

#ifdef USEFEATUREAANDB
#define FEATUREA
#define FEATUREB
#endif

Yes, there are real world examples where the lack of logical or makes things 
difficult to read. It's not all about versioning based on OS or boolean flags.

-Steve
Mar 14 2014
parent reply "Ethan" <gooberman gmail.com> writes:
On Friday, 14 March 2014 at 14:26:20 UTC, Steven Schveighoffer
wrote:
 No, not really, there was no else in the version.
I'm aware of that, and you missed the point. There's no need to
get so hung up on Walter's specific example when the pattern he
suggests you follow is widely used in analogous situations.
Mar 14 2014
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 14 Mar 2014 11:15:39 -0400, Ethan <gooberman gmail.com> wrote:

 On Friday, 14 March 2014 at 14:26:20 UTC, Steven Schveighoffer
 wrote:
 No, not really, there was no else in the version.
I'm aware of that, and you missed the point. There's no need to get so hung up on Walter's specific example when the pattern he suggests you follow is widely used in analagous situations.
I'm not hung up on his example; in fact, I said that in many cases factoring 
into a common version identifier is a good thing. But it's not always the 
answer, and the absence of boolean OR does not make things better in all 
cases. I have no problem with a version/else statement - I don't know why you 
brought it up; that works fine for me.

-Steve
Mar 14 2014
prev sibling next sibling parent reply 1100110 <0b1100110 gmail.com> writes:
On 3/13/14, 17:43, Walter Bright wrote:
 On 3/13/2014 3:21 PM, Timon Gehr wrote:
 On 03/13/2014 02:32 PM, Steven Schveighoffer wrote:
 The one I would really like to see is logical OR. There is no easy way
 around this, one must come up with convoluted mechanisms that are much
 harder to design, write, and understand than just version(x || y)
 version(A) version=AorB;
 version(B) version=AorB;
 version(AorB){ }
 If you're writing things like that, it's still missing the point. The point
 is not to find workarounds, but to rethink just what feature is being
 version'd on.

 For example, suppose it's wrapping a call to SomeWackyFunction:

      version (Linux)
          SomeWackyFunction();
      else version (OSX)
          SomeWackyFunction();
      else
          ... workaround ...

 Execrable answer:

      version (Linux) version=LinuxOrOSX;
      version (OSX)   version=LinuxOrOSX;
      ...
      version (LinuxOrOSX)
          SomeWackyFunction();
      else
          ... workaround ...

 This is execrable because LinuxOrOSX:

 1. has no particular relationship to SomeWackyFunction()

 2. makes for confusion if LinuxOrOSX is used also to version in other things

 3. makes for what do I do if I add in FreeBSD? Rename to
 LinuxOrOSXOrFreeBSD? yeech

 Better answer:

      version (Linux) version=hasSomeWackyFunction;
      version (OSX)   version=hasSomeWackyFunction;
      ...
      version (hasSomeWackyFunction)
          SomeWackyFunction();
      else
          ... workaround ...

 At least this is maintainable, though it's clumsy if that code sequence
 appears more than once or, worse, must be replicated in multiple modules.

 Even better:

 --------
 import wackyfunctionality;
 ...
 WackyFunction();
 --------
 module wackyfunctionality;

 void WackyFunction() {
      version (Linux)
          SomeWackyFunction();
      else version (OSX)
          SomeWackyFunction();
      else
          ... workaround ...
 }
 --------

 Simple, maintainable, easy on the eye.
...And code duplication everywhere!
Mar 14 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/14/2014 12:34 AM, 1100110 wrote:
 ...And code duplication everywhere!
Actually, very little of that.
Mar 14 2014
parent reply 1100110 <0b1100110 gmail.com> writes:
On 3/14/14, 3:02, Walter Bright wrote:
 On 3/14/2014 12:34 AM, 1100110 wrote:
 ...And code duplication everywhere!
Actually, very little of that.
I don't know what you'd call this then... Exact same bit of code, repeated 
multiple times for versions which could be OR'd together.

version (X86)
{
    enum RTLD_LAZY = 0x00001;
    enum RTLD_NOW = 0x00002;
    enum RTLD_GLOBAL = 0x00100;
    enum RTLD_LOCAL = 0x00000;
}
else version (X86_64)
{
    enum RTLD_LAZY = 0x00001;
    enum RTLD_NOW = 0x00002;
    enum RTLD_GLOBAL = 0x00100;
    enum RTLD_LOCAL = 0x00000;
}
else version (MIPS32)
{
    enum RTLD_LAZY = 0x0001;
    enum RTLD_NOW = 0x0002;
    enum RTLD_GLOBAL = 0x0004;
    enum RTLD_LOCAL = 0;
}
else version (PPC)
{
    enum RTLD_LAZY = 0x00001;
    enum RTLD_NOW = 0x00002;
    enum RTLD_GLOBAL = 0x00100;
    enum RTLD_LOCAL = 0;
}
else version (PPC64)
{
    enum RTLD_LAZY = 0x00001;
    enum RTLD_NOW = 0x00002;
    enum RTLD_GLOBAL = 0x00100;
    enum RTLD_LOCAL = 0;
}
else version (ARM)
{
    enum RTLD_LAZY = 0x00001;
    enum RTLD_NOW = 0x00002;
    enum RTLD_GLOBAL = 0x00100;
    enum RTLD_LOCAL = 0;
}
else version (AArch64)
{
    enum RTLD_LAZY = 0x00001;
    enum RTLD_NOW = 0x00002;
    enum RTLD_GLOBAL = 0x00100;
    enum RTLD_LOCAL = 0;
}

Yeah there are a few differences, but it would be trivial to collapse this 
down... Just for funsies:

version (X86 || X86_64)
{
    enum RTLD_LAZY = 0x00001;
    enum RTLD_NOW = 0x00002;
    enum RTLD_GLOBAL = 0x00100;
    enum RTLD_LOCAL = 0x00000;
}
else version (MIPS32)
{
    enum RTLD_LAZY = 0x0001;
    enum RTLD_NOW = 0x0002;
    enum RTLD_GLOBAL = 0x0004;
    enum RTLD_LOCAL = 0;
}
else version (PPC)
{
    enum RTLD_LAZY = 0x00001;
    enum RTLD_NOW = 0x00002;
    enum RTLD_GLOBAL = 0x00100;
    enum RTLD_LOCAL = 0;
}
else version (PPC64 || ARM || AArch64)
{
    enum RTLD_LAZY = 0x00001;
    enum RTLD_NOW = 0x00002;
    enum RTLD_GLOBAL = 0x00100;
    enum RTLD_LOCAL = 0;
}

Oh wait, isn't 0x00000 the same as 0?
(I honestly don't know if that matters, but assuming it doesn't...)

version (X86 || X86_64 || PPC || PPC64 || ARM || AArch64)
{
    enum RTLD_LAZY = 0x00001;
    enum RTLD_NOW = 0x00002;
    enum RTLD_GLOBAL = 0x00100;
    enum RTLD_LOCAL = 0x00000;
}
else version (MIPS32)
{
    enum RTLD_LAZY = 0x0001;
    enum RTLD_NOW = 0x0002;
    enum RTLD_GLOBAL = 0x0004;
    enum RTLD_LOCAL = 0;
}

Huh, for not having any code duplication it sure is a hell of a lot shorter 
when combined...
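For comparison, Walter's recommended style applied to this same example would forward each architecture to a single identifier first (the name `CommonRTLDFlags` is invented here for illustration; this is valid D today, unlike the `||` form above):

```d
// One line per architecture; the enum block itself appears only once.
version (X86)     version = CommonRTLDFlags;
version (X86_64)  version = CommonRTLDFlags;
version (PPC)     version = CommonRTLDFlags;
version (PPC64)   version = CommonRTLDFlags;
version (ARM)     version = CommonRTLDFlags;
version (AArch64) version = CommonRTLDFlags;

version (CommonRTLDFlags)
{
    enum RTLD_LAZY   = 0x00001;
    enum RTLD_NOW    = 0x00002;
    enum RTLD_GLOBAL = 0x00100;
    enum RTLD_LOCAL  = 0x00000;
}
else version (MIPS32)
{
    enum RTLD_LAZY   = 0x0001;
    enum RTLD_NOW    = 0x0002;
    enum RTLD_GLOBAL = 0x0004;
    enum RTLD_LOCAL  = 0;
}
```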
Mar 14 2014
next sibling parent Iain Buclaw <ibuclaw gdcproject.org> writes:
On 14 March 2014 08:51, 1100110 <0b1100110 gmail.com> wrote:
 On 3/14/14, 3:02, Walter Bright wrote:
 On 3/14/2014 12:34 AM, 1100110 wrote:
 ...And code duplication everywhere!
Actually, very little of that.
 I don't know what you'd call this then... Exact same bit of code, repeated
 multiple times for versions which could be OR'd together.

 version (X86)
 {
     enum RTLD_LAZY = 0x00001;
     enum RTLD_NOW = 0x00002;
     enum RTLD_GLOBAL = 0x00100;
     enum RTLD_LOCAL = 0x00000;
 }
 else version (X86_64)
 {
     enum RTLD_LAZY = 0x00001;
     enum RTLD_NOW = 0x00002;
     enum RTLD_GLOBAL = 0x00100;
     enum RTLD_LOCAL = 0x00000;
 }
 else version (MIPS32)
 {
     enum RTLD_LAZY = 0x0001;
     enum RTLD_NOW = 0x0002;
     enum RTLD_GLOBAL = 0x0004;
     enum RTLD_LOCAL = 0;
 }
 else version (PPC)
 {
     enum RTLD_LAZY = 0x00001;
     enum RTLD_NOW = 0x00002;
     enum RTLD_GLOBAL = 0x00100;
     enum RTLD_LOCAL = 0;
 }
 else version (PPC64)
 {
     enum RTLD_LAZY = 0x00001;
     enum RTLD_NOW = 0x00002;
     enum RTLD_GLOBAL = 0x00100;
     enum RTLD_LOCAL = 0;
 }
 else version (ARM)
 {
     enum RTLD_LAZY = 0x00001;
     enum RTLD_NOW = 0x00002;
     enum RTLD_GLOBAL = 0x00100;
     enum RTLD_LOCAL = 0;
 }
 else version (AArch64)
 {
     enum RTLD_LAZY = 0x00001;
     enum RTLD_NOW = 0x00002;
     enum RTLD_GLOBAL = 0x00100;
     enum RTLD_LOCAL = 0;
 }

 Yeah there are a few differences, but it would be trivial to collapse this
 down... Just for funsies:

 version (X86 || X86_64)
 {
     enum RTLD_LAZY = 0x00001;
     enum RTLD_NOW = 0x00002;
     enum RTLD_GLOBAL = 0x00100;
     enum RTLD_LOCAL = 0x00000;
 }
 else version (MIPS32)
 {
     enum RTLD_LAZY = 0x0001;
     enum RTLD_NOW = 0x0002;
     enum RTLD_GLOBAL = 0x0004;
     enum RTLD_LOCAL = 0;
 }
 else version (PPC)
 {
     enum RTLD_LAZY = 0x00001;
     enum RTLD_NOW = 0x00002;
     enum RTLD_GLOBAL = 0x00100;
     enum RTLD_LOCAL = 0;
 }
 else version (PPC64 || ARM || AArch64)
 {
     enum RTLD_LAZY = 0x00001;
     enum RTLD_NOW = 0x00002;
     enum RTLD_GLOBAL = 0x00100;
     enum RTLD_LOCAL = 0;
 }

 Oh wait, isn't 0x00000 the same as 0?
 (I honestly don't know if that matters, but assuming it doesn't...)

 version (X86 || X86_64 || PPC || PPC64 || ARM || AArch64)
 {
     enum RTLD_LAZY = 0x00001;
     enum RTLD_NOW = 0x00002;
     enum RTLD_GLOBAL = 0x00100;
     enum RTLD_LOCAL = 0x00000;
 }
 else version (MIPS32)
 {
     enum RTLD_LAZY = 0x0001;
     enum RTLD_NOW = 0x0002;
     enum RTLD_GLOBAL = 0x0004;
     enum RTLD_LOCAL = 0;
 }

 Huh, for not having any code duplication it sure is a hell of a lot shorter
 when combined...
This is exactly the problem I wanted to avoid in druntime. Someone needs to pull their finger out and decide how we are going to tackle the porting chasm we are heading into.
Mar 14 2014
prev sibling next sibling parent reply "Regan Heath" <regan netmail.co.nz> writes:
On Fri, 14 Mar 2014 08:51:05 -0000, 1100110 <0b1100110 gmail.com> wrote:
      version (X86 || X86_64 || PPC || PPC64 || ARM || AArch64)
      {
          enum RTLD_LAZY = 0x00001;
          enum RTLD_NOW = 0x00002;
          enum RTLD_GLOBAL = 0x00100;
          enum RTLD_LOCAL = 0x00000;
      }
      else version (MIPS32)
      {
          enum RTLD_LAZY = 0x0001;
          enum RTLD_NOW = 0x0002;
          enum RTLD_GLOBAL = 0x0004;
          enum RTLD_LOCAL = 0;
      }
Walter's point, I believe, is that you should define a meaningful version 
identifier for each specific case, and that this is "better" because then 
you're less concerned about where it's supported and more concerned with what 
it is which is/isn't supported.

Maintenance is very slightly better too, IMO, because you add/remove/alter a 
complete line rather than editing a set of || && etc which can in some cases 
be a little confusing. Basically, the chance of an error is very slightly 
lower.

For example, either this:

version(X86) version = MeaningfulVersion
version(X86_64) version = MeaningfulVersion
version(PPC) version = MeaningfulVersion
version(PPC64) version = MeaningfulVersion
version(ARM) version = MeaningfulVersion
version(AArch64) version = MeaningfulVersion

version(MeaningfulVersion)
{
}
else version (MIPS32)
{
}

or this:

version (X86) version = MeaningfulVersion
version (X86_64) version = MeaningfulVersion
version (PPC) version = MeaningfulVersion
version (PPC64) version = MeaningfulVersion
version (ARM) version = MeaningfulVersion
version (AArch64) version = MeaningfulVersion

version (MIPS32) version = OtherMeaningfulVersion

version (MeaningfulVersion)
{
}
else version (OtherMeaningfulVersion)
{
}

Regan

-- 
Using Opera's revolutionary email client: http://www.opera.com/mail/
Mar 14 2014
next sibling parent reply 1100110 <0b1100110 gmail.com> writes:
On 3/14/14, 4:58, Regan Heath wrote:
 Maintenance is very slightly better too, IMO, because you
 add/remove/alter a complete line rather than editing a set of || && etc
 which can in some cases be a little confusing.  Basically, the chance of
 an error is very slightly lower.

 For example, either this:

 version(X86) version = MeaningfulVersion
 version(X86_64) version = MeaningfulVersion
 version(PPC) version = MeaningfulVersion
 version(PPC64) version = MeaningfulVersion
 version(ARM) version = MeaningfulVersion
 version(AArch64) version = MeaningfulVersion

 version(MeaningfulVersion)
 {
 }
 else version (MIPS32)
 {
 }

 or this:

 version (X86) version = MeaningfulVersion
 version (X86_64) version = MeaningfulVersion
 version (PPC) version = MeaningfulVersion
 version (PPC64) version = MeaningfulVersion
 version (ARM) version = MeaningfulVersion
 version (AArch64) version = MeaningfulVersion

 version (MIPS32) version = OtherMeaningfulVersion

 version (MeaningfulVersion)
 {
 }
 else version (OtherMeaningfulVersion)
 {
 }

 Regan
...I can't even begin to describe how much more readable the OR'd version is.
Mar 14 2014
parent reply "Regan Heath" <regan netmail.co.nz> writes:
On Fri, 14 Mar 2014 10:22:40 -0000, 1100110 <0b1100110 gmail.com> wrote:

 On 3/14/14, 4:58, Regan Heath wrote:
 Maintenance is very slightly better too, IMO, because you
 add/remove/alter a complete line rather than editing a set of || && etc
 which can in some cases be a little confusing.  Basically, the chance of
 an error is very slightly lower.

 For example, either this:

 version(X86) version = MeaningfulVersion
 version(X86_64) version = MeaningfulVersion
 version(PPC) version = MeaningfulVersion
 version(PPC64) version = MeaningfulVersion
 version(ARM) version = MeaningfulVersion
 version(AArch64) version = MeaningfulVersion

 version(MeaningfulVersion)
 {
 }
 else version (MIPS32)
 {
 }

 or this:

 version (X86) version = MeaningfulVersion
 version (X86_64) version = MeaningfulVersion
 version (PPC) version = MeaningfulVersion
 version (PPC64) version = MeaningfulVersion
 version (ARM) version = MeaningfulVersion
 version (AArch64) version = MeaningfulVersion

 version (MIPS32) version = OtherMeaningfulVersion

 version (MeaningfulVersion)
 {
 }
 else version (OtherMeaningfulVersion)
 {
 }

 Regan
...I can't even begin to describe how much more readable the OR'd version is.
It's shorter, but shorter does not mean more "readable", if by readable you mean the ability to communicate intent etc. Add to that that readability is just one metric. Walter's point is that the above pattern is better at communicating intent, clarifying your logic, and making the resulting version statements easier to understand (aka "more readable").

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/
Mar 14 2014
next sibling parent reply Mike Parker <aldacron gmail.com> writes:
On 3/14/2014 10:21 PM, Regan Heath wrote:
 On Fri, 14 Mar 2014 10:22:40 -0000, 1100110 <0b1100110 gmail.com> wrote:

 On 3/14/14, 4:58, Regan Heath wrote:
 Maintenance is very slightly better too, IMO, because you
 add/remove/alter a complete line rather than editing a set of || && etc
 which can in some cases be a little confusing.  Basically, the chance of
 an error is very slightly lower.

 For example, either this:

 version(X86) version = MeaningfulVersion
 version(X86_64) version = MeaningfulVersion
 version(PPC) version = MeaningfulVersion
 version(PPC64) version = MeaningfulVersion
 version(ARM) version = MeaningfulVersion
 version(AArch64) version = MeaningfulVersion

 version(MeaningfulVersion)
 {
 }
 else version (MIPS32)
 {
 }

 or this:

 version (X86) version = MeaningfulVersion
 version (X86_64) version = MeaningfulVersion
 version (PPC) version = MeaningfulVersion
 version (PPC64) version = MeaningfulVersion
 version (ARM) version = MeaningfulVersion
 version (AArch64) version = MeaningfulVersion

 version (MIPS32) version = OtherMeaningfulVersion

 version (MeaningfulVersion)
 {
 }
 else version (OtherMeaningfulVersion)
 {
 }

 Regan
...I can't even begin to describe how much more readable the OR'd version is.
It's shorter, but shorter does not mean more "readable".. if by readable you mean include the ability to communicate intent etc. Add to that, that readable is just one metric. Walter's point is that the above pattern is better at communicating intent, clarifying your logic, and making the resulting version statements easier to understand (aka "more readable") R
For me, the issue is that this has to go in *every* module that needs MeaningfulVersion and OtherMeaningfulVersion to be defined. That's a lot more points to track than a single line of ORed versions in each module.

When I do need something like this, I just define some manifest constants in a config module, import that everywhere, and use static if. But that feels very much like the workaround it is. I would much prefer to have boolean versions. The bottom line is that I'm doing exactly what Walter apparently doesn't want me to do anyway, just with the added annoyance of importing an extra module to get it done.
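The manifest-constant workaround described above might look roughly like this (a sketch only; the module name `config` and the flag `useSimdPath` are invented for illustration):

```d
// config.d -- central configuration module, imported everywhere
module config;

// A manifest constant standing in for a boolean version flag.
version (X86)
    enum bool useSimdPath = true;
else version (X86_64)
    enum bool useSimdPath = true;
else
    enum bool useSimdPath = false;

// ---- user.d -- every module that cares must import config ----
// module user;
// import config;
//
// static if (useSimdPath)
//     enum implementation = "simd";
// else
//     enum implementation = "portable";
```

The static if branches on an ordinary boolean, so it can be combined with && and ||, which plain version identifiers do not allow.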
Mar 14 2014
next sibling parent reply Johannes Pfau <nospam example.com> writes:
Am Fri, 14 Mar 2014 23:34:46 +0900
schrieb Mike Parker <aldacron gmail.com>:

 When I do need something like this, I just define some manifest 
 constants in a config module, import that everywhere, and use static
 if. But that feels very much like the workaround it is. I would much
 prefer to have boolean versions. The bottom line is that I'm doing
 exactly what Walter apparently doesn't want me to do anyway, just
 with the added annoyance of importing an extra module to get it done.
I use manifest constants instead of version identifiers as well. If a version identifier affects the public API/ABI of a library, then the library and all code using the library always have to be compiled with the same version switches (inlining and templates make this an even bigger problem). This is not only inconvenient, it's also easy to think of examples where the problem will only show up as crashes at runtime.

The only reason why that's not an issue in phobos/druntime is that we only use compiler-defined versions there, but user-defined versions are almost unusable.
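A tiny sketch of the hazard Johannes describes (the module name `lib` and the version identifier `BigNodes` are invented for illustration):

```d
// lib.d -- hypothetical library module whose ABI depends on a version switch
module lib;

struct Node
{
    version (BigNodes)
        long[4] payload;   // layout when built with -version=BigNodes
    else
        long payload;      // layout when built without it
}

// If the library is built with -version=BigNodes but the user's code is
// not (or vice versa), the two sides disagree on Node.sizeof.  Inlined
// and template code bakes the wrong layout in silently, and the mismatch
// only surfaces as corruption or crashes at runtime.
```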
Mar 14 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/14/2014 10:26 AM, Johannes Pfau wrote:
 I use manifest constants instead of version identifiers as well. If a
 version identifier affects the public API/ABI of a library, then the
 library and all code using the library always have to be compiled with
 the same version switches(inlining and templates make this an even
 bigger problem). This is not only inconvenient, it's also easy to think
 of examples where the problem will only show up as crashes at runtime.
 The only reason why that's not an issue in phobos/druntime is that we
 only use compiler defined versions there, but user defined versions are
 almost unusable.
Use this method:

--------
import wackyfunctionality;
...
WackyFunction();
--------
module wackyfunctionality;

void WackyFunction() {
    version (Linux)
        SomeWackyFunction();
    else version (OSX)
        SomeWackyFunction();
    else
        ... workaround ...
}
--------
Mar 14 2014
next sibling parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On 14 March 2014 17:53, Walter Bright <newshound2 digitalmars.com> wrote:
 On 3/14/2014 10:26 AM, Johannes Pfau wrote:
 I use manifest constants instead of version identifiers as well. If a
 version identifier affects the public API/ABI of a library, then the
 library and all code using the library always have to be compiled with
 the same version switches(inlining and templates make this an even
 bigger problem). This is not only inconvenient, it's also easy to think
 of examples where the problem will only show up as crashes at runtime.
 The only reason why that's not an issue in phobos/druntime is that we
 only use compiler defined versions there, but user defined versions are
 almost unusable.
Use this method: -------- import wackyfunctionality; ... WackyFunction(); -------- module wackyfunctionality; void WackyFunction() { version (Linux) SomeWackyFunction(); else version (OSX) SomeWackyFunction(); else ... workaround ... } --------
Some years down the line (and some platform testing) turns into:

--------
module wackyfunctionality;

void WackyFunction() {
    version (Linux) {
        version (ARM)
            _SomeWackyFunction();
        else version (MIPS)
            MIPS_SomeWackyFunction();
        else version (X86)
            SomeWackyFunction();
        else version (X86_64)
            SomeWackyFunction();
        else
            ... should be some wacky function, but workaround for general case ...
    }
    else version (OSX) {
        version (PPC)
            iSomeWackyFunction();
        else
            SomeWackyFunction();   // In hope there's no other Apple hardware.
    }
    else version (OpenBSD) {
        /// Blah
    }
    else version (Haiku) {
        /// Blah
    }
    else
        ... workaround ...
}
--------
Mar 14 2014
next sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
Am 14.03.2014 19:06, schrieb Iain Buclaw:
 On 14 March 2014 17:53, Walter Bright <newshound2 digitalmars.com> wrote:
 On 3/14/2014 10:26 AM, Johannes Pfau wrote:
 I use manifest constants instead of version identifiers as well. If a
 version identifier affects the public API/ABI of a library, then the
 library and all code using the library always have to be compiled with
 the same version switches(inlining and templates make this an even
 bigger problem). This is not only inconvenient, it's also easy to think
 of examples where the problem will only show up as crashes at runtime.
 The only reason why that's not an issue in phobos/druntime is that we
 only use compiler defined versions there, but user defined versions are
 almost unusable.
Use this method: -------- import wackyfunctionality; ... WackyFunction(); -------- module wackyfunctionality; void WackyFunction() { version (Linux) SomeWackyFunction(); else version (OSX) SomeWackyFunction(); else ... workaround ... } --------
Some years down the line (and some platform testing) turns into: -------- module wackyfunctionality; void WackyFunction() { version (Linux) { version (ARM) _SomeWackyFunction(); else version (MIPS) MIPS_SomeWackyFunction(); else version (X86) SomeWackyFunction(); else version (X86_64) SomeWackyFunction(); else ... should be some wacky function, but workaround for general case ... } else version (OSX) { version (PPC) iSomeWackyFunction(); else SomeWackyFunction(); // In hope there's no other Apple hardware. } else version (OpenBSD) { /// Blah } else version (Haiku) { /// Blah } else ... workaround ... } --------
That is why the best approach is to have one module per platform for platform-specific code, with a common interface defined in a .di file.

Back in my C/C++ days at work, any conditional code would be killed by me during code reviews.

--
Paulo
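The layout Paulo describes might be sketched like this (file and function names are hypothetical, and the build system is assumed to compile exactly one implementation file per target):

```d
// wacky.di -- the common interface; all platforms agree on this
module wacky;
void wackyFunction();

// ---- wacky_linux.d -- the Linux implementation ----
// module wacky;
// import core.sys.posix.unistd;   // platform-specific imports live here
// void wackyFunction() { /* Linux-specific code */ }

// ---- wacky_osx.d -- the OS X implementation ----
// module wacky;
// void wackyFunction() { /* OS X-specific code */ }
```

Since each implementation file only ever sees one platform, no version blocks are needed inside them at all.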
Mar 14 2014
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Mar 14, 2014 at 07:29:27PM +0100, Paulo Pinto wrote:
 Am 14.03.2014 19:06, schrieb Iain Buclaw:
On 14 March 2014 17:53, Walter Bright <newshound2 digitalmars.com> wrote:
On 3/14/2014 10:26 AM, Johannes Pfau wrote:
I use manifest constants instead of version identifiers as well. If
a version identifier affects the public API/ABI of a library, then
the library and all code using the library always have to be
compiled with the same version switches(inlining and templates make
this an even bigger problem). This is not only inconvenient, it's
also easy to think of examples where the problem will only show up
as crashes at runtime.  The only reason why that's not an issue in
phobos/druntime is that we only use compiler defined versions
there, but user defined versions are almost unusable.
Use this method: -------- import wackyfunctionality; ... WackyFunction(); -------- module wackyfunctionality; void WackyFunction() { version (Linux) SomeWackyFunction(); else version (OSX) SomeWackyFunction(); else ... workaround ... } --------
Some years down the line (and some platform testing) turns into: -------- module wackyfunctionality; void WackyFunction() { version (Linux) { version (ARM) _SomeWackyFunction(); else version (MIPS) MIPS_SomeWackyFunction(); else version (X86) SomeWackyFunction(); else version (X86_64) SomeWackyFunction(); else ... should be some wacky function, but workaround for general case ... } else version (OSX) { version (PPC) iSomeWackyFunction(); else SomeWackyFunction(); // In hope there's no other Apple hardware. } else version (OpenBSD) { /// Blah } else version (Haiku) { /// Blah } else ... workaround ... } --------
That is why the best approach is to have one module per platform specific code, with a common interface defined in .di file.
+1. Once versioned code gets more than 2 levels deep, it becomes an unreadable mess. The .di approach is much more manageable.
 Back on my C/C++ days at work, any conditional code would be killed
 by me during code reviews.
[...]

Ah, how I wish I could do that... over here at my job, parts of the code are a nasty rats'-nest of #if's, #ifdef's, #ifndef's, and "functions" that aren't defined anywhere (they are generated by macros, including their names!). It used to be relatively sane while the project still remained a single project...

Unfortunately, about a year or so ago, the PTBs decided to merge another project into this one, and by "merge" they meant: graft the source tree of the other project into this one, hack it with a hacksaw until it compiles, then call it a day. We've been suffering from the resulting schizophrenic code ever since, where some files are compiled when configuring for platform A and skipped over, and some other files are compiled when configuring for platform B (often containing conflicting functions of the same name but with incompatible parameters), and a ton of #if's and #ifdef's nested to the n'th level got sprinkled everywhere in the common code in order to glue the schizophrenic mess into one piece.

One time, I spent almost an hour debugging some code that turned out to be inside an #if 0 ... #endif block. >:-( (The real code had been moved elsewhere, you see, and whoever moved the code "kindly" decided to leave the old copy in the original file inside an #if 0 block, "for reference", whatever that means. Then silly old me came along expecting the code to still be in the old place, and sure enough it was -- except that unbeknownst to me it's now inside an #if 0 block. Gah!)

T

--
People tell me that I'm paranoid, but they're just out to get me.
Mar 14 2014
parent reply Paulo Pinto <pjmlp progtools.org> writes:
Am 14.03.2014 19:50, schrieb H. S. Teoh:
 On Fri, Mar 14, 2014 at 07:29:27PM +0100, Paulo Pinto wrote:
 Am 14.03.2014 19:06, schrieb Iain Buclaw:
 On 14 March 2014 17:53, Walter Bright <newshound2 digitalmars.com> wrote:
 On 3/14/2014 10:26 AM, Johannes Pfau wrote:
 I use manifest constants instead of version identifiers as well. If
 a version identifier affects the public API/ABI of a library, then
 the library and all code using the library always have to be
 compiled with the same version switches(inlining and templates make
 this an even bigger problem). This is not only inconvenient, it's
 also easy to think of examples where the problem will only show up
 as crashes at runtime.  The only reason why that's not an issue in
 phobos/druntime is that we only use compiler defined versions
 there, but user defined versions are almost unusable.
Use this method: -------- import wackyfunctionality; ... WackyFunction(); -------- module wackyfunctionality; void WackyFunction() { version (Linux) SomeWackyFunction(); else version (OSX) SomeWackyFunction(); else ... workaround ... } --------
Some years down the line (and some platform testing) turns into: -------- module wackyfunctionality; void WackyFunction() { version (Linux) { version (ARM) _SomeWackyFunction(); else version (MIPS) MIPS_SomeWackyFunction(); else version (X86) SomeWackyFunction(); else version (X86_64) SomeWackyFunction(); else ... should be some wacky function, but workaround for general case ... } else version (OSX) { version (PPC) iSomeWackyFunction(); else SomeWackyFunction(); // In hope there's no other Apple hardware. } else version (OpenBSD) { /// Blah } else version (Haiku) { /// Blah } else ... workaround ... } --------
That is why the best approach is to have one module per platform specific code, with a common interface defined in .di file.
+1. Once versioned code gets more than 2 levels deep, it becomes an unreadable mess. The .di approach is much more manageable.
 Back on my C/C++ days at work, any conditional code would be killed
 by me during code reviews.
[...] Ah, how I wish I could do that... over here at my job, parts of the code are a nasty rats'-nest of #if's, #ifdef's, #ifndef's, and "functions" that aren't defined anywhere (they are generated by macros, including their names!). It used to be relatively sane while the project still remained a single project... Unfortunately, about a year or so ago, the PTBs decided to merge another project into this one, and by "merge" they meant, graft the source tree of the other project into this one, hack it with a hacksaw until it compiles, then call it a day. We've been suffering from the resulting schizophrenic code ever since, where some files are compiled when configuring for platform A, and skipped over and some other files are compiled when configuring for platform B (often containing conflicting functions of the same name but with incompatible parameters), and a ton of #if's and #ifdef's nested to the n'th level got sprinkled everywhere in the common code in order to glue the schizophrenic mess into one piece. One time, I spent almost an hour debugging some code that turned out to be inside an #if 0 ... #endif block. >:-( (The real code had been moved elsewhere, you see, and whoever moved the code "kindly" decided to leave the old copy in the original file inside an #if 0 block, "for reference", whatever that means. Then silly old me came along expecting the code to still be in the old place, and sure enough it was -- except that unbeknownst to me it's now inside an #if 0 block. Gah!) T
Ouch! I feel your pain.

This type of experience is what led me to fight #ifdef spaghetti code.

--
Paulo
Mar 14 2014
next sibling parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Friday, 14 March 2014 at 20:45:35 UTC, Paulo Pinto wrote:
 Am 14.03.2014 19:50, schrieb H. S. Teoh:
 The real code had been moved elsewhere, you see, and
 whoever moved the code "kindly" decided to leave the old copy 
 in the
 original file inside an #if 0 block, "for reference", whatever 
 that
 means. Then silly old me came along expecting the code to 
 still be in
 the old place, and sure enough it was -- except that 
 unbeknownst to me
 it's now inside an #if 0 block. Gah!)


 T
Ouch! I feel your pain. This type of experience is what lead me to fight #ifdef spaghetti code. -- Paulo
I hate code "commented out" in an "#if 0" with a passion. Just... Why?
Mar 14 2014
next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Friday, 14 March 2014 at 20:51:09 UTC, monarch_dodra wrote:
 On Friday, 14 March 2014 at 20:45:35 UTC, Paulo Pinto wrote:
 Am 14.03.2014 19:50, schrieb H. S. Teoh:
 The real code had been moved elsewhere, you see, and
 whoever moved the code "kindly" decided to leave the old copy 
 in the
 original file inside an #if 0 block, "for reference", 
 whatever that
 means. Then silly old me came along expecting the code to 
 still be in
 the old place, and sure enough it was -- except that 
 unbeknownst to me
 it's now inside an #if 0 block. Gah!)


 T
Ouch! I feel your pain. This type of experience is what lead me to fight #ifdef spaghetti code. -- Paulo
I hate code "commented out" in an "#if 0" with a passion. Just... Why?
Because one does not know how to use git. PS: This sarcastic note also exists with mercurial flavor.
Mar 14 2014
prev sibling parent reply Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-03-14 20:51:08 +0000, "monarch_dodra" <monarchdodra gmail.com> said:

 I hate code "commented out" in an "#if 0" with a passion. Just... Why?
Better this:

#if 0
...
#else
...
#endif

than this:

/*
...
/*/
...
//*/

--
Michel Fortin
michel.fortin michelf.ca
http://michelf.ca
Mar 14 2014
parent "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 15 March 2014 at 00:00:48 UTC, Michel Fortin wrote:
 On 2014-03-14 20:51:08 +0000, "monarch_dodra" 
 <monarchdodra gmail.com> said:

 I hate code "commented out" in an "#if 0" with a passion. 
 Just... Why?
Better this: #if 0 ... #else ... #endif than this: /* ... /*/ ... //*/
/+ ... /*+//*//+*/ ... //+/
Mar 14 2014
prev sibling parent "Joakim" <joakim airpost.net> writes:
On Friday, 14 March 2014 at 20:45:35 UTC, Paulo Pinto wrote:
 Am 14.03.2014 19:50, schrieb H. S. Teoh:
 On Fri, Mar 14, 2014 at 07:29:27PM +0100, Paulo Pinto wrote:
 Am 14.03.2014 19:06, schrieb Iain Buclaw:
 On 14 March 2014 17:53, Walter Bright 
 <newshound2 digitalmars.com> wrote:
 On 3/14/2014 10:26 AM, Johannes Pfau wrote:
--snip---
 +1. Once versioned code gets more than 2 levels deep, it 
 becomes an
 unreadable mess. The .di approach is much more manageable.


 Back on my C/C++ days at work, any conditional code would be 
 killed
 by me during code reviews.
[...] Ah, how I wish I could do that... over here at my job, parts of the code are a nasty rats'-nest of #if's, #ifdef's, #ifndef's, and "functions" that aren't defined anywhere (they are generated by macros, including their names!). It used to be relatively sane while the project still remained a single project... Unfortunately, about a year or so ago, the PTBs decided to merge another project into this one, and by "merge" they meant, graft the source tree of the other project into this one, hack it with a hacksaw until it compiles, then call it a day. We've been suffering from the resulting schizophrenic code ever since, where some files are compiled when configuring for platform A, and skipped over and some other files are compiled when configuring for platform B (often containing conflicting functions of the same name but with incompatible parameters), and a ton of #if's and #ifdef's nested to the n'th level got sprinkled everywhere in the common code in order to glue the schizophrenic mess into one piece. One time, I spent almost an hour debugging some code that turned out to be inside an #if 0 ... #endif block. >:-( (The real code had been moved elsewhere, you see, and whoever moved the code "kindly" decided to leave the old copy in the original file inside an #if 0 block, "for reference", whatever that means. Then silly old me came along expecting the code to still be in the old place, and sure enough it was -- except that unbeknownst to me it's now inside an #if 0 block. Gah!) T
Ouch! I feel your pain. This type of experience is what lead me to fight #ifdef spaghetti code. -- Paulo
Yeah, having had to deal with macro spaghetti when porting code to new platforms, I completely agree with Walter on this one. Whatever small inconveniences are caused by not allowing any logic inside or with version checks are made up for many times over in clarity and maintenance down the line.
Mar 14 2014
prev sibling next sibling parent Iain Buclaw <ibuclaw gdcproject.org> writes:
On 14 Mar 2014 18:30, "Paulo Pinto" <pjmlp progtools.org> wrote:
 Am 14.03.2014 19:06, schrieb Iain Buclaw:
 On 14 March 2014 17:53, Walter Bright <newshound2 digitalmars.com> wrote:
 On 3/14/2014 10:26 AM, Johannes Pfau wrote:
 I use manifest constants instead of version identifiers as well. If a
 version identifier affects the public API/ABI of a library, then the
 library and all code using the library always have to be compiled with
 the same version switches(inlining and templates make this an even
 bigger problem). This is not only inconvenient, it's also easy to think
 of examples where the problem will only show up as crashes at runtime.
 The only reason why that's not an issue in phobos/druntime is that we
 only use compiler defined versions there, but user defined versions are
 almost unusable.
Use this method: -------- import wackyfunctionality; ... WackyFunction(); -------- module wackyfunctionality; void WackyFunction() { version (Linux) SomeWackyFunction(); else version (OSX) SomeWackyFunction(); else ... workaround ... } --------
Some years down the line (and some platform testing) turns into: -------- module wackyfunctionality; void WackyFunction() { version (Linux) { version (ARM) _SomeWackyFunction(); else version (MIPS) MIPS_SomeWackyFunction(); else version (X86) SomeWackyFunction(); else version (X86_64) SomeWackyFunction(); else ... should be some wacky function, but workaround for general
case ...
      }
      else version (OSX) {
          version (PPC)
             iSomeWackyFunction();
          else
             SomeWackyFunction();   // In hope there's no other Apple
hardware.
      }
      else version (OpenBSD) {
        /// Blah
      }
      else version (Haiku) {
        /// Blah
      }
      else
          ... workaround ...
 }
 --------
That is why the best approach is to have one module per platform specific
code, with a common interface defined in .di file.

Don't tell me, tell the druntime maintainers.  :)
Mar 14 2014
prev sibling parent reply Johannes Pfau <nospam example.com> writes:
Am Fri, 14 Mar 2014 19:29:27 +0100
schrieb Paulo Pinto <pjmlp progtools.org>:

 That is why the best approach is to have one module per platform 
 specific code, with a common interface defined in .di file.
Which is basically what Iain proposed for druntime. Then the thread got hijacked and talked about three different issues in the end. Walter answered to the other issues, but not to Iain's original request, Andrei agreed with Walter, the discussion ended, pull request closed and nothing will happen ;-) https://github.com/D-Programming-Language/druntime/pull/731 https://github.com/D-Programming-Language/druntime/pull/732 I think we'll have to revisit this at some point, but right now there's other stuff to be done...
Mar 15 2014
parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On 15 Mar 2014 13:45, "Johannes Pfau" <nospam example.com> wrote:
 Am Fri, 14 Mar 2014 19:29:27 +0100
 schrieb Paulo Pinto <pjmlp progtools.org>:

 That is why the best approach is to have one module per platform
 specific code, with a common interface defined in .di file.
Which is basically what Iain proposed for druntime. Then the thread got hijacked and talked about three different issues in the end. Walter answered to the other issues, but not to Iain's original request, Andrei agreed with Walter, the discussion ended, pull request closed and nothing will happen ;-) https://github.com/D-Programming-Language/druntime/pull/731 https://github.com/D-Programming-Language/druntime/pull/732 I think we'll have to revisit this at some point, but right now there's other stuff to be done...
Indeed other stuff needs to be done; it just so happens that, thanks to sys.posix's bad design, splitting out other modules into ports will be more of a pain. But it shows how *no one* in that thread who responded, either against the first pull or by hijacking the second, had a Scooby about the issue being addressed. They didn't even have the curiosity to give alternate suggestions.
Mar 16 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/16/2014 1:04 AM, Iain Buclaw wrote:
 Indeed other stuff needs to be done, it just so happens that thanks to
 sys.posix's bad design splitting out other modules into ports will be more a
 pain.  But it shows how *no one* in that thread who responded either against
the
 first pull, or went off and hijacked the second had a Scooby about the issue
 being addressed.  Didn't even have the curiosity to give alternate suggestions.
The 731 pull added more files under the old package layout. The 732 added more files to the config system, which I objected to. I believe my comments were apropos and suggested a better package structure than the one in the PR's.
Mar 16 2014
parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On 17 Mar 2014 00:05, "Walter Bright" <newshound2 digitalmars.com> wrote:
 On 3/16/2014 1:04 AM, Iain Buclaw wrote:
 Indeed other stuff needs to be done, it just so happens that thanks to
 sys.posix's bad design splitting out other modules into ports will be
more a
 pain.  But it shows how *no one* in that thread who responded either
against the
 first pull, or went off and hijacked the second had a Scooby about the
issue
 being addressed.  Didn't even have the curiosity to give alternate
suggestions.
 The 731 pull added more files under the old package layout.
I acknowledged that in the original PR comments. I said it wasn't ideal before you commented. I had made a change after you commented to make things a little more ideal. But came across problems as described in my previous message above when I tried to do the same with more modules. No one really gave any feedback I could work with. But 731 is the idea that people I've spoken to agree with (at least when they say separate files they make no reference to packaging it), and no one has contended it in 11666 either. It just needs some direction when it comes to actually doing it, and I feel the two showstoppers are the sys.linux/sys.windows lark, and the absence of any configure in the build system.
 The 732 added more files to the config system, which I objected to.
Better than creating a new ports namespace. But at least I toyed around with the idea. It seems sound to move things to packages and have:

version (X86) public import x86stuff;
version (ARM) public import armstuff;

But it just doesn't scale beyond a few files, and I think I showed that through the PR, and I'm satisfied that it didn't succeed and that became the logical conclusion.

Yet no one in the conversation ever once used the words ARM, MIPS, PPC, X86... The strange fixation on the word POSIX had me scratching my head all the way through.
Mar 17 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/17/2014 12:23 AM, Iain Buclaw wrote:
 No one really gave any feedback I could work with.
I'll take the files you created and simply do it to show what it looks like. I infer I've completely failed at explaining it otherwise.
Mar 17 2014
parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On 17 Mar 2014 07:40, "Walter Bright" <newshound2 digitalmars.com> wrote:
 On 3/17/2014 12:23 AM, Iain Buclaw wrote:
 No one really gave any feedback I could work with.
I'll take the files you created and simply do it to show what it looks
like. I infer I've completely failed at explaining it otherwise.

If it's in any relation to your comments in the PR, my opinion is that they
are irrelevant to to PR in question, but they *are* relevant in their own
right and warrant a new bug/PR to be raised.
Mar 17 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/17/2014 12:53 AM, Iain Buclaw wrote:
 If it's in any relation to your comments in the PR, my opinion is that they are
 irrelevant to to PR in question, but they *are* relevant in their own right and
 warrant a new bug/PR to be raised.
Here it is: https://github.com/D-Programming-Language/druntime/pull/741 I think it shows it is very relevant to your PR, as in fact I included your files essentially verbatim, I just changed the package layout.
Mar 17 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Monday, 17 March 2014 at 10:25:26 UTC, Walter Bright wrote:
 On 3/17/2014 12:53 AM, Iain Buclaw wrote:
 If it's in any relation to your comments in the PR, my opinion 
 is that they are
 irrelevant to to PR in question, but they *are* relevant in 
 their own right and
 warrant a new bug/PR to be raised.
Here it is: https://github.com/D-Programming-Language/druntime/pull/741 I think it shows it is very relevant to your PR, as in fact I included your files essentially verbatim, I just changed the package layout.
So I'd import "core.sys.ucontext.package" if I didn't want a system-specific module (which should be always)? Why this approach and not publishing modules from somewhere into core.sys on install?
Mar 17 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/17/2014 12:18 PM, Sean Kelly wrote:
 So I'd import "core.sys.ucontext.package" if I didn't want a
 system-specific module (which should be always)?
No,

    import core.sys.ucontext;

Yes, ucontext is a directory. The package.d is a magic file name. This is the new package design that was incorporated last year, designed specifically to allow single imports to be replaced by packages without affecting user code.
 Why this approach and not publishing modules from somewhere into core.sys
 on install?
The short answer is I happen to have a fondness for installs that are simple directory copies that do not modify/add/remove files. I think we are safely beyond the days that even a few hundred extra files installed on the disk are a negative. Even if the non-used platform packages are simply deleted on install, this will not affect compilation. I think that's still better than modifying files. I've never trusted installers that edited files.

Besides that, there are other strong reasons for this approach:

1. New platforms can be added without affecting user code.

2. New platforms can be added without touching files for other platforms.

3. User follows a simple no-brainer rule when looking at OS documentation:

    #include <ucontext.h>

rewrites to:

    import core.sys.ucontext;

4. Bugs in particular platform support files can be fixed without concern about breaking other platforms.

5. Changes in platform support will not touch files for other platforms, greatly simplifying the QA review process.

6. D installed on Platform X can be mounted as a remote drive and used to compile for Platform Y.
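A rough sketch of what such a package module could look like (the platform list and file contents here are illustrative, not the actual druntime source):

```d
// core/sys/ucontext/package.d -- forwards to the right platform module
module core.sys.ucontext;

version (linux)
    public import core.sys.ucontext.linux;
else version (OSX)
    public import core.sys.ucontext.osx;
else version (FreeBSD)
    public import core.sys.ucontext.freebsd;
else
    static assert(false, "platform not yet ported");

// core/sys/ucontext/linux.d, osx.d, freebsd.d would each hold one
// platform's declarations and never mention any other platform.
```

User code then just writes `import core.sys.ucontext;` and never names a platform, which is what keeps the layout change invisible to existing code.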
Mar 17 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Monday, 17 March 2014 at 19:42:47 UTC, Walter Bright wrote:
 On 3/17/2014 12:18 PM, Sean Kelly wrote:
 So I'd import "core.sys.ucontext.package" if I didn't want a
 system-specific module (which should be always)?
No,

    import core.sys.ucontext;

Yes, ucontext is a directory. The package.d is a magic file name. This is the new package design that was incorporated last year, designed specifically to allow single imports to be replaced by packages without affecting user code.
Ah. I suspected this might be the case and searched the language docs before posting, but couldn't find any mention of this so I thought I'd ask. I like the idea of a file per platform, and am undecided whether I prefer this or the publishing solution. This one sounds more flexible, but it may be more difficult to produce installs that contain only the files relevant to some particular platform.
Mar 17 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/17/2014 1:23 PM, Sean Kelly wrote:
 I like the idea of a file per platform, and am undecided whether
 I prefer this or the publishing solution.  This one sounds more
 flexible, but it may be more difficult to produce installs that
 contain only the files relevant to some particular platform.
At the worst, you could do a recursive delete on freebsd.d, etc. :-)

Note that even with all these platform files, the compilation process only involves two of them: package.d and the platform-specific file.
Mar 17 2014
prev sibling parent "Sean Kelly" <sean invisibleduck.org> writes:
On Sunday, 16 March 2014 at 08:04:24 UTC, Iain Buclaw wrote:
 Indeed other stuff needs to be done, it just so happens that  
 thanks to
 sys.posix's bad design splitting out other modules into ports  
 will be more
 a pain.  But it shows how *no one* in that thread who responded
  either
 against the first pull, or went off and hijacked the second had
  a Scooby
 about the issue being addressed.  Didn't even have the 
 curiosity to give
 alternate suggestions.
Pretty sure I agreed with your motivation here, though I figured I'd defer the design to someone who has experience actually dealing with this many ports.
Mar 17 2014
prev sibling parent reply "Jacob Carlborg" <doob me.com> writes:
On Friday, 14 March 2014 at 18:06:47 UTC, Iain Buclaw wrote:

     else version (OSX) {
         version (PPC)
            iSomeWackyFunction();
         else
            SomeWackyFunction();   // In hope there's no other 
 Apple hardware.
There's also ARM, ARM64, x86 32bit and PPC64.

-- 
/Jacob Carlborg
Mar 15 2014
next sibling parent Iain Buclaw <ibuclaw gdcproject.org> writes:
On 15 Mar 2014 09:44, "Jacob Carlborg" <doob me.com> wrote:
 On Friday, 14 March 2014 at 18:06:47 UTC, Iain Buclaw wrote:

     else version (OSX) {
         version (PPC)
            iSomeWackyFunction();
         else
            SomeWackyFunction();   // In hope there's no other Apple
hardware.
 There's also ARM, ARM64, x86 32bit and PPC64.

 --
 /Jacob Carlborg
Wonderful - so the OSX bindings in druntime are pretty much in a dire state for someone who wishes to port to a non-X86 architecture? I know the BSD and Solaris code needs fixing up and testing.
Mar 15 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/15/2014 2:22 AM, Jacob Carlborg wrote:
 On Friday, 14 March 2014 at 18:06:47 UTC, Iain Buclaw wrote:

     else version (OSX) {
         version (PPC)
            iSomeWackyFunction();
         else
            SomeWackyFunction();   // In hope there's no other Apple hardware.
There's also ARM, ARM64, x86 32bit and PPC64.
Right, there should not be else clauses with the word "hope" in them. They should be "static assert(0);" or else be portable.
Mar 15 2014
prev sibling parent reply Johannes Pfau <nospam example.com> writes:
Am Fri, 14 Mar 2014 10:53:59 -0700
schrieb Walter Bright <newshound2 digitalmars.com>:

 On 3/14/2014 10:26 AM, Johannes Pfau wrote:
 I use manifest constants instead of version identifiers as well. If
 a version identifier affects the public API/ABI of a library, then
 the library and all code using the library always have to be
 compiled with the same version switches(inlining and templates make
 this an even bigger problem). This is not only inconvenient, it's
 also easy to think of examples where the problem will only show up
 as crashes at runtime. The only reason why that's not an issue in
 phobos/druntime is that we only use compiler defined versions
 there, but user defined versions are almost unusable.
Use this method:
--------
import wackyfunctionality;
...
WackyFunction();
--------
module wackyfunctionality;

void WackyFunction() {
    version (Linux)
        SomeWackyFunction();
    else version (OSX)
        SomeWackyFunction();
    else
        ... workaround ...
}
--------
I meant really 'custom versions', not OS-related. For example cairoD wraps the cairo C library. cairo can be compiled without or with PNG support. Historically cairoD used version(CAIRO_HAS_PNG_SUPPORT) for this.

Then in cairo.d:

version(CAIRO_HAS_PNG_SUPPORT)
{
    extern(C) int cairo_save_png(char* x);
    void savePNG(string x){cairo_save_png(toStringz(x));}
}

Now I have to use version=CAIRO_HAS_PNG_SUPPORT when compiling cairoD, but every user of cairoD also has to use version=CAIRO_HAS_PNG_SUPPORT or the compiler will hide the savePNG functions. There are also examples where not using the same version= switches causes runtime crashes.

Compiler defined versions (linux, OSX) are explicitly not affected by this issue as they are always defined by the compiler for all modules.
Mar 15 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/15/2014 6:44 AM, Johannes Pfau wrote:
 Then in cairo.d
 version(CAIRO_HAS_PNG_SUPPORT)
 {
     extern(C) int cairo_save_png(char* x);
     void savePNG(string x){cairo_save_png(toStringz(x))};
 }
try adding:

    else
    {
        void savePNG(string x) { }
    }

and then your users can just call savePNG without checking the version.
Mar 16 2014
next sibling parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On 17 Mar 2014 01:25, "Walter Bright" <newshound2 digitalmars.com> wrote:
 On 3/15/2014 6:44 AM, Johannes Pfau wrote:
 Then in cairo.d
 version(CAIRO_HAS_PNG_SUPPORT)
 {
     extern(C) int cairo_save_png(char* x);
     void savePNG(string x){cairo_save_png(toStringz(x))};
 }
try adding:

    else
    {
        void savePNG(string x) { }
    }

and then your users can just call savePNG without checking the version.
If I recall, he was saying the problem was that you must pass -fversion=CAIRO_HAS_PNG_SUPPORT to every file that imports it, because you want PNG support, not stubs.

It's more an example where you need a build system in place for a simple hello world in cairoD if you don't want to be typing too much just to get your test program built. :)
Mar 17 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/17/2014 12:39 AM, Iain Buclaw wrote:
 If I recall, he was saying that you must pass -fversion=CAIRO_HAS_PNG_SUPPORT
to
 every file that imports it was the problem, because you want PNG support, not
stubs.
Stub out the functions for PNG support. Then just call them, they will do nothing if PNG isn't supported. There is NO NEED for the callers to set version.
 It's more an example where you need a build system in place for a simple hello
 world in cairoD if you don't want to be typing too much just to get your test
 program built. :)
If you need to -fversion=CAIRO_HAS_PNG_SUPPORT for every file that imports it, you have completely misunderstood the design I suggested.

1. Encapsulate the feature in a function.

2. Implement the function in module X. Module X is the ONLY module that needs the version. In module X, define the function to do nothing if version is false.

3. Nobody who imports X has to define the version.

4. Just call the function as if the feature always exists.

5. If you find you still need a version in the importer, then you didn't fully encapsulate the feature. Go back to step 1.
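The steps above can be sketched as a single module, reusing the symbol names from Johannes's cairoD example. This is only the ONE file that would ever be built with the version switch; the module name is hypothetical:

```d
// Hypothetical module X: the ONLY file compiled with
// -version=CAIRO_HAS_PNG_SUPPORT. Importers never set the version;
// they just call savePNG unconditionally.
module cairo.png;

import std.string : toStringz;

version (CAIRO_HAS_PNG_SUPPORT)
{
    extern (C) int cairo_save_png(const(char)* x);

    // Real implementation, only present when the feature is built in.
    void savePNG(string x) { cairo_save_png(toStringz(x)); }
}
else
{
    // Stub: when PNG support is compiled out, the call does nothing,
    // so callers need no version checks of their own.
    void savePNG(string x) { }
}
```

Whether a silent no-op stub is acceptable is exactly what the rest of this subthread debates; Michel Fortin's template-function variant below trades the no-op for a link-time error instead.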
Mar 17 2014
parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On 17 March 2014 08:06, Walter Bright <newshound2 digitalmars.com> wrote:
 On 3/17/2014 12:39 AM, Iain Buclaw wrote:
 If I recall, he was saying that you must pass
 -fversion=CAIRO_HAS_PNG_SUPPORT to
 every file that imports it was the problem, because you want PNG support,
 not stubs.
Stub out the functions for PNG support. Then just call them, they will do nothing if PNG isn't supported. There is NO NEED for the callers to set version.
 It's more an example where you need a build system in place for a simple
 hello
 world in cairoD if you don't want to be typing too much just to get your
 test
 program built. :)
If you need to -fversion=CAIRO_HAS_PNG_SUPPORT for every file that imports it, you have completely misunderstood the design I suggested.

1. Encapsulate the feature in a function.

2. Implement the function in module X. Module X is the ONLY module that needs the version. In module X, define the function to do nothing if version is false.

3. Nobody who imports X has to define the version.

4. Just call the function as if the feature always exists.

5. If you find you still need a version in the importer, then you didn't fully encapsulate the feature. Go back to step 1.
Right, but going back full circle to the original comment:

"For example cairoD wraps the cairo C library. cairo can be compiled without or with PNG support. Historically cairoD used version(CAIRO_HAS_PNG_SUPPORT) for this."

Requires that cairoD have this encapsulation you suggest, but also requires detection in some form of configure system that checks:

1) Is cairo installed? (Mandatory, fails without)
2) Does the installed version of cairo have PNG support? (If true, set build to compile a version of module X with version=CAIRO_HAS_PNG_SUPPORT)
Mar 17 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/17/2014 1:35 AM, Iain Buclaw wrote:
 Right,
If so, why do all modules need the version statement?
Mar 17 2014
parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On 17 March 2014 08:55, Walter Bright <newshound2 digitalmars.com> wrote:
 On 3/17/2014 1:35 AM, Iain Buclaw wrote:
 Right,
If so, why do all modules need the version statement?
That is a question to ask the historical maintainers of cairoD. Having a look at it now. It has a single config.d with enum bools to turn on/off features.
Mar 17 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/17/2014 2:32 AM, Iain Buclaw wrote:
 On 17 March 2014 08:55, Walter Bright <newshound2 digitalmars.com> wrote:
 On 3/17/2014 1:35 AM, Iain Buclaw wrote:
 Right,
If so, why do all modules need the version statement?
That is a question to ask the historical maintainers of cairoD. Having a look at it now. It has a single config.d with enum bools to turn on/off features.
If those enums are controlled by a version statement, then the version will have to be set for every source file that imports it. This is not the best design - the abstractions are way too leaky.
Mar 17 2014
parent reply Johannes Pfau <nospam example.com> writes:
Am Mon, 17 Mar 2014 03:49:24 -0700
schrieb Walter Bright <newshound2 digitalmars.com>:

 On 3/17/2014 2:32 AM, Iain Buclaw wrote:
 On 17 March 2014 08:55, Walter Bright <newshound2 digitalmars.com>
 wrote:
 On 3/17/2014 1:35 AM, Iain Buclaw wrote:
 Right,
If so, why do all modules need the version statement?
That is a question to ask the historical maintainers of cairoD. Having a look at it now. It has a single config.d with enum bools to turn on/off features.
If those enums are controlled by a version statement, then the version will have to be set for every source file that imports it. This is not the best design - the abstractions are way too leaky.
It's meant to be set at configure time, when the library is being built, by a configure script or similar. They're not controlled by version statements at all. That's nothing special, it's config.h for D.

The reason all modules needed the version statement was that I didn't use the stub-function trick. Cairo also has classes which can be available or unavailable. Stubbing all these classes doesn't seem to be a good solution. I also think it's bad API design if a user can call a stub 'savePNG' function which just does nothing.

A perfect solution for cairoD needs to handle all these cases:

    cairo has PNG support:   true                    false
    user wants to use PNG:   optional  true  false   optional  true   false
    result:                  ok        ok    ok      ok        error  ok

with config.d and static if:
-----------------------
enum bool CAIRO_HAS_PNG_SUPPORT = true; //true/false is inserted by
                                        //configure script

static if(CAIRO_HAS_PNG_SUPPORT)
    void savePNG();
-----------------------

library users can do this:
-----------------------
import cairo.config;

static if(!CAIRO_HAS_PNG_SUPPORT)
    assert(false, "Need PNG support");

static if(CAIRO_HAS_PNG_SUPPORT)
    //Offer option to optionally save file as PNG as well
-----------------------

If they don't check for CAIRO_HAS_PNG_SUPPORT and just use savePNG then (1) it'll work if PNG support is available, (2) the function is not defined if PNG support is not available.

With versions the user has no way to know if the library actually supports PNG or not. He can only guess and the optional case can't be implemented at all.
Mar 17 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/17/2014 11:46 AM, Johannes Pfau wrote:
 With versions the user has no way to know if the library actually
 supports PNG or not. He can only guess and the optional case can't be
 implemented at all.
I don't know cairoD's design requirements or tradeoffs so I will speak generally.

I suggest solving this by raising the level of abstraction. At some point, in user code, there's got to be:

    if (CAIRO_HAS_PNG_SUPPORT)
        doThis();
    else
        doThat();

I suggest adding the following to the Cairo module:

    void doSomething()
    {
        if (CAIRO_HAS_PNG_SUPPORT)
            doThis();
        else
            doThat();
    }

and the user code becomes:

    doSomething();
Mar 17 2014
prev sibling parent reply Johannes Pfau <nospam example.com> writes:
Am Mon, 17 Mar 2014 08:35:45 +0000
schrieb Iain Buclaw <ibuclaw gdcproject.org>:

 If you need to -fversion=CAIRO_HAS_PNG_SUPPORT for every file that
 imports it, you have completely misunderstood the design I
 suggested.

 1. Encapsulate the feature in a function.

 2. Implement the function in module X. Module X is the ONLY module
 that needs the version. In module X, define the function to do
 nothing if version is false.

 3. Nobody who imports X has to define the version.

 4. Just call the function as if the feature always exists.
Clever, but potentially dangerous once cross-module inlining starts working (The inlined code could be different from the code in the library).
Mar 17 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/17/2014 11:31 AM, Johannes Pfau wrote:
 Clever, but potentially dangerous once cross-module inlining starts
 working (The inlined code could be different from the code in the
 library).
True, but you can use .di files to prevent that.
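The idea, as a minimal sketch (file and symbol names are hypothetical, following the cairoD example in this thread): a hand-written .di interface file declares the function without a body, so importers compile against the declaration only and cross-module inlining can never bake in the wrong version of the implementation.

```d
// cairo.di -- interface file shipped to users (hypothetical name).
// No function body here: the body lives in the separately compiled
// library, so callers always link against whatever the library was
// actually built with.
module cairo;

void savePNG(string x);
```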
Mar 17 2014
prev sibling parent Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-03-17 01:20:37 +0000, Walter Bright <newshound2 digitalmars.com> said:

 On 3/15/2014 6:44 AM, Johannes Pfau wrote:
 Then in cairo.d
 version(CAIRO_HAS_PNG_SUPPORT)
 {
     extern(C) int cairo_save_png(char* x);
     void savePNG(string x){cairo_save_png(toStringz(x))};
 }
try adding: else { void savePNG(string x) { } } and then your users can just call savePNG without checking the version.
Adding a stub that does nothing, not even a runtime error, isn't a very good solution in my book. If this function call should fail, it should fail early and noisily.

So here's my suggestion: use a template function for the wrapper.

    extern(C) int cairo_save_png(char* x);
    void savePNG()(string x){cairo_save_png(toStringz(x));}

If you call it somewhere and cairo_save_png was not compiled in Cairo, you'll get a link-time error (undefined symbol cairo_save_png). If you don't call savePNG anywhere there's no issue because savePNG was never instantiated.

-- 
Michel Fortin
michel.fortin michelf.ca
http://michelf.ca
Mar 17 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/14/2014 7:34 AM, Mike Parker wrote:
 For me, the issue is that this has to go in *every* module that needs
 MeaningfulVersion and OtherMeaningfulVersion to be defined. That's a lot more
 points to track than a single line of ORed versions in each module.

 When I do need something like this, I just define some manifest constants in a
 config module, import that everywhere, and use static if. But that feels very
 much like the workaround it is. I would much prefer to have boolean versions.
I know this seems to make perfect sense, and that is how things are done in the C world. I've spent decades doing it that way. I grew to hate it, but had a hard time thinking of a better way, because I was so used to doing it that way.

There's a better way, though. In my posts, I listed a couple ways to avoid having to do this in every module.

BTW, the static if approach was used in druntime, and caused a bug that took me hours to track down. I PR'd that out.
 The bottom line is that I'm doing exactly what Walter apparently doesn't want
me
 to do anyway, just with the added annoyance of importing an extra module to get
 it done.
Your code is your business, but while ah gots bref in mah body, that won't be in druntime/phobos :-)
Mar 14 2014
prev sibling parent reply 1100110 <0b1100110 gmail.com> writes:
On 3/14/14, 8:21, Regan Heath wrote:
 On Fri, 14 Mar 2014 10:22:40 -0000, 1100110 <0b1100110 gmail.com> wrote:

 On 3/14/14, 4:58, Regan Heath wrote:
 Maintenance is very slightly better too, IMO, because you
 add/remove/alter a complete line rather than editing a set of || && etc
 which can in some cases be a little confusing.  Basically, the chance of
 an error is very slightly lower.

 For example, either this:

 version(X86) version = MeaningfulVersion
 version(X86_64) version = MeaningfulVersion
 version(PPC) version = MeaningfulVersion
 version(PPC64) version = MeaningfulVersion
 version(ARM) version = MeaningfulVersion
 version(AArch64) version = MeaningfulVersion

 version(MeaningfulVersion)
 {
 }
 else version (MIPS32)
 {
 }

 or this:

 version (X86) version = MeaningfulVersion
 version (X86_64) version = MeaningfulVersion
 version (PPC) version = MeaningfulVersion
 version (PPC64) version = MeaningfulVersion
 version (ARM) version = MeanigfulVersion
 version (AArch64) version = MeaningfulVersion

 version (MIPS32) version = OtherMeaningfulVersion

 version (MeaningfulVersion)
 {
 }
 else version (OtherMeaningfulVersion)
 {
 }

 Regan
...I can't even begin to describe how much more readable the OR'd version is.
It's shorter, but shorter does not mean more "readable"... if by readable you mean the ability to communicate intent etc. Add to that, readable is just one metric.

Walter's point is that the above pattern is better at communicating intent, clarifying your logic, and making the resulting version statements easier to understand (aka "more readable").

R
That's an awful lot of typo opportunities.... Quick! which one did I change!?
Mar 14 2014
parent "Regan Heath" <regan netmail.co.nz> writes:
On Fri, 14 Mar 2014 14:46:33 -0000, 1100110 <0b1100110 gmail.com> wrote:
 That's an awful lot of typo opportunities....   Quick!  which one did I  
 change!?
Copy/paste.

R

-- 
Using Opera's revolutionary email client: http://www.opera.com/mail/
Mar 14 2014
prev sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 14 Mar 2014 05:58:28 -0400, Regan Heath <regan netmail.co.nz>  
wrote:

 On Fri, 14 Mar 2014 08:51:05 -0000, 1100110 <0b1100110 gmail.com> wrote:
      version (X86 || X86_64 || PPC || PPC64 || ARM || AArch64)
      {
          enum RTLD_LAZY = 0x00001;
          enum RTLD_NOW = 0x00002;
          enum RTLD_GLOBAL = 0x00100;
          enum RTLD_LOCAL = 0x00000;
      }
      else version (MIPS32)
      {
          enum RTLD_LAZY = 0x0001;
          enum RTLD_NOW = 0x0002;
          enum RTLD_GLOBAL = 0x0004;
          enum RTLD_LOCAL = 0;
      }
Walter's point, I believe, is that you should define a meaningful version identifier for each specific case, and that this is "better" because then you're less concerned about where it's supported and more concerned with what it is which is/isn't supported.

Maintenance is very slightly better too, IMO, because you add/remove/alter a complete line rather than editing a set of || && etc which can in some cases be a little confusing. Basically, the chance of an error is very slightly lower.

For example, either this:

version (X86) version = MeaningfulVersion
version (X86_64) version = MeaningfulVersion
version (PPC) version = MeaningfulVersion
version (PPC64) version = MeaningfulVersion
version (ARM) version = MeaningfulVersion
version (AArch64) version = MeaningfulVersion

version (MeaningfulVersion)
{
}
else version (MIPS32)
{
}

or this:

version (X86) version = MeaningfulVersion
version (X86_64) version = MeaningfulVersion
version (PPC) version = MeaningfulVersion
version (PPC64) version = MeaningfulVersion
version (ARM) version = MeaningfulVersion
version (AArch64) version = MeaningfulVersion

version (MIPS32) version = OtherMeaningfulVersion

version (MeaningfulVersion)
{
}
else version (OtherMeaningfulVersion)
{
}

Regan
I think the point we are trying to make is, what if MeaningfulVersion does not exist? That is, how do you attribute a name to those flags? Then it becomes a "where's waldo" to see if your particular platform defines the arbitrary name you had to choose. There's not always an easy to define symbol for everything.

BTW, && does not present the same problem, because there isn't much difference between:

    version(x && y)

    version(x) version(y)

But there is a huge difference between

    version(x || y)

    version(x) version = somearbitrarysymbol;
    version(y) version = somearbitrarysymbol;
    version(somearbitrarysymbol)

-Steve
Mar 14 2014
prev sibling next sibling parent reply "Rikki Cattermole" <alphaglosined gmail.com> writes:
Another option is using pure functions and static if's.

const bool ProgrammingSections = getProgrammingSections;
const bool PasteBinFullSection = getPasteBinFullSection;

pure bool getProgrammingSections() {
	if (CompleteSite) {
		return true;
	} else {
		return false;
	}
}

pure bool getPasteBinFullSection() {
	if (CompleteSite || ProgrammingSections) {
		return true;
	} else {
		return false;
	}
}

static if (ProgrammingSections) {
     // yay programming
}


static if (PasteBinFullSection) {
     // Ooo pastebin!
}


Also works rather well through different modules.
Mar 14 2014
parent "Rikki Cattermole" <alphaglosined gmail.com> writes:
On Friday, 14 March 2014 at 10:30:47 UTC, Rikki Cattermole wrote:
 Another option is using pure functions and static if's.

 const bool ProgrammingSections = getProgrammingSections;
 const bool PasteBinFullSection = getPasteBinFullSection;

 pure bool getProgrammingSections() {
 	if (CompleteSite) {
 		return true;
 	} else {
 		return false;
 	}
 }

 pure bool getPasteBinFullSection() {
 	if (CompleteSite || ProgrammingSections) {
 		return true;
 	} else {
 		return false;
 	}
 }

 static if (ProgrammingSections) {
     // yay programming
 }


 static if (PasteBinFullSection) {
     // Ooo pastebin!
 }


 Also works rather well through different modules.
Or for more advanced usage that could be quite useful in phobos:

pure bool isVersionDefinedAnd(ARGS...)() {
    foreach(name; ARGS) {
        mixin("version(" ~ name ~ ") {} else return false;");
    }
    return true;
}

enum isLinux1And = isVersionDefinedAnd!("linux");
enum isLinux2And = isVersionDefinedAnd!("linux", "Posix");
pragma(msg, isLinux1And);
pragma(msg, isLinux2And);

pure bool isVersionDefinedOr(ARGS...)() {
    foreach(name; ARGS) {
        mixin("version(" ~ name ~ ") return true;");
    }
    return false;
}

enum isMainPlatform = isVersionDefinedOr!("linux", "Windows", "Posix", "Android", "OSX");
pragma(msg, isMainPlatform);
Mar 14 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/14/2014 1:51 AM, 1100110 wrote:
 Actually, very little of that.
I don't know what you'd call this then... Exact same bit of code, repeated multiple times for versions which could be OR'd together.
I don't call it code duplication. The code may look the same, but it is NOT. Arbitrary similarities between sections of code that are disparate is not code duplication. The values are magic numbers specific to particular platforms, and should be treated as such. They are not common code between platforms.

I know what always happens (including in druntime) when this sort of "duplication" was eschewed - one value would get "fixed" for platform X, and then platforms Y and Z silently broke. Every time.

(It's also not code duplication because only one branch of the versions ever winds up in the executable, never multiple branches.)
Mar 14 2014
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 13 Mar 2014 18:43:36 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 3/13/2014 3:21 PM, Timon Gehr wrote:
 On 03/13/2014 02:32 PM, Steven Schveighoffer wrote:
 The one I would really like to see is logical OR. There is no easy way
 around this, one must come up with convoluted mechanisms that are much
 harder to design, write, and understand than just version(x || y)
version(A) version=AorB;
version(B) version=AorB;
version(AorB){ }
If you're writing things like that, it's still missing the point. The point is not to find workarounds, but to rethink just what feature is being version'd on.
There are some times where AorB is the best description, and the machinery to factor out is just unnecessarily verbose.

I'm not denying that in some cases, when each version requires its own unique block (Andrei's is a good example), avoiding OR expressions makes a lot more sense. But those are not all cases. For example, if you have 4 different versions, and 3 of the blocks are the same in module x:

version(a)
{
    decla;
    declb;
    declc;
}
version(b)
{
    decla;
    declb;
    declc;
}
version(c)
{
    decla;
    declb;
    declc;
}
version(d)
{
    decld;
    decle;
    declf;
}
else
    assert(0, "unsupported version!");

Factoring this out becomes an exercise in treasure-map reading:

version(blahblahblah)
{
    decla;
    declb;
    declc;
}
version(d)
{
    decld;
    decle;
    declf;
}
else
    assert(0, "unsupported version!");

Now, I have to go find blahblahblah. Maybe there's even another factoring, and blahblahblah is defined by another intermediate version. Maybe in another block, I have to use yaddayaddayadda, because only a and b use a common block and c is different.

Granted, I can invent names for these that make sense, and in some cases (like version(posix) or version(unixen) or something) it makes a lot of sense that anything which supports them will go into that version. I get that. But when I'm looking at something like blahblahblah, and wondering if it's compiled with version(a), I have to go on a hunt. Why not just:

    version(a || b || c)

In fact, it probably is a good idea to do:

    version(blahblahblah) // a || b || c

to be clearer...

Note that it's already in your best interest to factor a || b || c into another identifier, instead of having to repeat that over and over again. But sometimes, it's just unnecessarily verbose hoops you have to jump through when reading or writing versioned code.

Also note that many many times, a, b, and c are mutually exclusive. So there will only be OR clauses for those, never AND. The code is readable and straightforward with boolean operators, convoluted and verbose without.

In any case, I don't expect any movement on this. Just had to rant ;)

-Steve
Mar 14 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/14/14, 7:18 AM, Steven Schveighoffer wrote:
 There are some times where AorB is the best description, and the
 machinery to factor out is just unnecessarily verbose.
Innocently entering the fray :o).

1. I don't quite understand the sheer strength of opinion of Walter's in this particular matter. It's as if you chat with a friend and suddenly you figure he has a disproportionately strong opinion on some small matter. The argument is always the same, invoking some so-bad-it's-painful-to-recount past experience. I do take the point but can't stop thinking I also have past experience to draw from that's not as traumatizing. Something that I did notice is very bad (and version awesomely avoids it) is the possibility to re#define macros in C such that the meaning of code depends on what has been sequentially read/included. But Boolean operators don't seem to have had a huge impact. So I'm always left a bit confused.

2. I have comparable difficulty understanding why the "opposition" feels also so strongly about it. There are various good means to approach versioning challenges without compromising. (Rikki's idea was new to me!)


Andrei
Mar 14 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
Take a look at the source code to dmd, and grep for '#if'.

I've made a concerted effort to get rid of versioning. There's still some left, 
but it's largely gone. I consider this a vast improvement over the typical
C/C++ 
program loaded to the gills with #if's.

(Nearly all the #if's in dmd are to turn logging on and off, which is something 
other than versioning.)

Nobody has commented that I recall on the lack of #if's in dmd source. I
suspect 
it's like when you have a headache, and then suddenly you realize the headache 
is gone and you have no recollection of just when it went away :-)
Mar 14 2014
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/13/14, 3:21 PM, Timon Gehr wrote:
 On 03/13/2014 02:32 PM, Steven Schveighoffer wrote:
 The one I would really like to see is logical OR. There is no easy way
 around this, one must come up with convoluted mechanisms that are much
 harder to design, write, and understand than just version(x || y)
version(A) version=AorB; version(B) version=AorB; version(AorB){ }
From a project at work:

// Import the appropriate built-in #define menagerie
version (gcc4_7_1) { version = gnu;
    import defines_gcc4_7_1, defines_gxx4_7_1; }
version (gcc4_8_1) { version = gnu;
    import defines_gcc4_8_1, defines_gxx4_8_1; }
version (clang3_2) { version = clang;
    import defines_clang3_2, defines_clangxx3_2; }
version (clang3_4) { version = clang;
    import defines_clang3_4, defines_clangxx3_4; }
version (clangdev) { version = clang;
    import defines_clangdev, defines_clangxxdev; }
...

// Compiler-dependent extra defines
version (gnu)
{
    ...
}

version (clang)
{
    ...
}

yum


Andrei
Mar 13 2014
prev sibling next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 The argument for final by default, as eloquently expressed by 
 Manu, is a good one. Even Andrei agrees with it (!).

 The trouble, however, was illuminated most recently by the 
 std.json regression that broke existing code. The breakage 
 wasn't even intentional; it was a mistake.
Making final by default is an intentional and planned change: first you introduce "virtual" (it's already present in dmd 2.066alpha), then you give a warning, then you deprecate things, and then later you generate errors. This gives time to people to fix the code.

Even languages far older and far more widespread than D change, like the nullptr of C++ that replaces the 0 as null pointer. People that use D for production code should expect to give a look at the changelog every time a D version comes out and fix the code accordingly. I am keeping a large amount of D2 code updated, and introducing the usage of "virtual" in my code will take an amount of time that is little compared to actually writing new code, refactoring code for other purposes, fixing bugs, etc.

I don't think you can write D code today and expect it to work perfectly years from now. You have to keep your code updated or keep using the same compiler version. We can introduce ways to better manage the change, like the "deprecate" keyword, introducing a refactoring tool like the one in the Go language, and keep some backwards-incompatible changes that the community and you regard as sufficiently important (like deprecating some usages of the comma operator, etc). I don't even care much about "final by default".

Also in D there are several features that are "going to be deprecated", like old-style operator overloading, the built-in sort, and more. Keeping such things in the language for years without even a deprecation message is bad. In the D.learn newsgroup I keep saying to people "don't use that, it's going to be deprecated". People use those things, and if someday they get actually deprecated they will cause a damage perhaps comparable to introducing final by default. So please add deprecation messages now for all things that should be deprecated.

Bye,
bearophile
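The staged deprecation described above is already expressible in the language; a minimal sketch of how a library author can do it (the function names here are made up for illustration):

```d
import std.algorithm.sorting : sort;

// Stage the change: the old entry point still compiles but emits a
// deprecation message steering users to the replacement before the
// old name is finally removed.
deprecated("scheduled for removal; use sortedCopy instead")
int[] oldSort(int[] a)
{
    return sortedCopy(a);
}

// The replacement: returns a sorted copy, leaving the input untouched.
int[] sortedCopy(int[] a)
{
    auto b = a.dup;
    b.sort();
    return b;
}
```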
Mar 12 2014
parent "Mike" <none none.com> writes:
On Wednesday, 12 March 2014 at 23:51:41 UTC, bearophile wrote:
 Walter Bright:

 The argument for final by default, as eloquently expressed by 
 Manu, is a good one. Even Andrei agrees with it (!).

 The trouble, however, was illuminated most recently by the 
 std.json regression that broke existing code. The breakage 
 wasn't even intentional; it was a mistake.
Making final by default is an intentional and planned change: first you introduce "virtual" (it's already present in dmd 2.066alpha), then you give a warning, then you deprecate things, and then later you generate errors. This gives people time to fix their code. Even languages far older and far more widespread than D change, like the nullptr of C++ that replaces 0 as the null pointer.

People that use D for production code should expect to look at the changelog every time a D version comes out and fix their code accordingly. I am keeping a large amount of D2 code updated, and introducing the usage of "virtual" in my code will take an amount of time that is little compared to actually writing new code, refactoring code for other purposes, fixing bugs, etc. I don't think you can write D code today and expect it to work perfectly years from now. You have to keep your code updated or keep using the same compiler version.

We can introduce ways to better manage the change, like the "deprecated" keyword, introducing a refactoring tool like the one in the Go language, and keep only those backwards-incompatible changes that the community and you regard as sufficiently important (like deprecating some usages of the comma operator, etc). I don't even care much about "final by default".

Also in D there are several features that are "going to be deprecated", like old-style operator overloading, the built-in sort, and more. Keeping such things in the language for years without even a deprecation message is bad. In the D.learn newsgroup I keep telling people "don't use that, it's going to be deprecated". People use those things, and if someday they actually get deprecated they will cause damage, perhaps comparable to introducing final by default. So please add deprecation messages now for all things that should be deprecated.

Bye,
bearophile
+1
Mar 12 2014
prev sibling next sibling parent reply "Chris Williams" <yoreanon-chrisw yahoo.co.jp> writes:
On Wednesday, 12 March 2014 at 22:50:00 UTC, Walter Bright wrote:
 But we nearly lost a major client over it.

 We're past the point where we can break everyone's code.
As someone who would like to be able to use D as a language, professionally, it's more important to me that D gain future clients than that it maintain the ones that it has. Even more important is that it does both of those things.

The JSON implementation, for example, is fine if you just want to read JSON data and are willing to look at the std.json.d file to figure out what the data structures look like, so you can actually interact with them. The whole thing should be replaced, because JSON is a fairly big part of modern life, and not having a very usable library for it is the sort of thing that would prevent someone from considering the language for a project.

To some extent, making changes that are non-breaking is good. But I don't think that's a full solution. Some of the features of the language just aren't where they need to be to encourage mass adoption, and fixing those will be breaking. Sooner or later it's going to be necessary to either branch and support old versions of the compiler and the library, and/or to provide compatibility flags.

Flags like "--strict" are fairly common, so having a compiler mode that defaults to final declarations, defaults to nothrow, pure, safe, etc. seems pretty reasonable. You could even make that the default mode, with "--nonstrict" or something as a compiler flag preserving backwards compatibility (eternally, or for a given period of time). Alternately, one could do like Java and have target versions for compiling. If you did that, you would probably want to lump breaking changes into a single release every few years, so that there isn't a different version number for every update.

I like the idea of having final: and !final:, but I think that compiler flags are the right answer for how to approach defaulting to final.
Mar 12 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/12/2014 5:02 PM, Chris Williams wrote:
 As someone who would like to be able to use D as a language, professionally,
 it's more important to me that D gain future clients than that it maintains the
 ones that it has. Even more important is that it does both of those things.
The D1 -> D2 transition very nearly destroyed D by sacrificing all the momentum it had.
Mar 12 2014
next sibling parent "Chris Williams" <yoreanon-chrisw yahoo.co.jp> writes:
On Thursday, 13 March 2014 at 00:15:50 UTC, Walter Bright wrote:
 On 3/12/2014 5:02 PM, Chris Williams wrote:
 As someone who would like to be able to use D as a language, 
 professionally,
 it's more important to me that D gain future clients than that 
 it maintains the
 ones that it has. Even more important is that it does both of 
 those things.
The D1 -> D2 transition very nearly destroyed D by sacrificing all the momentum it had.
I didn't propose abandoning the current standard.
Mar 12 2014
prev sibling parent reply Manu <turkeyman gmail.com> writes:
On 13 March 2014 10:15, Walter Bright <newshound2 digitalmars.com> wrote:

 On 3/12/2014 5:02 PM, Chris Williams wrote:

 As someone who would like to be able to use D as a language,
 professionally,
 it's more important to me that D gain future clients than that it
 maintains the
 ones that it has. Even more important is that it does both of those
 things.
The D1 -> D2 transition very nearly destroyed D by sacrificing all the momentum it had.
To draw that as a comparison to the issue on topic is one of the biggest exaggerations I've seen in a while, and you're not usually prone to that sort of thing.
Mar 12 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/12/14, 8:35 PM, Manu wrote:
 On 13 March 2014 10:15, Walter Bright <newshound2 digitalmars.com
 <mailto:newshound2 digitalmars.com>> wrote:

     On 3/12/2014 5:02 PM, Chris Williams wrote:

         As someone who would like to be able to use D as a language,
         professionally,
         it's more important to me that D gain future clients than that
         it maintains the
         ones that it has. Even more important is that it does both of
         those things.


     The D1 -> D2 transition very nearly destroyed D by sacrificing all
     the momentum it had.


 To draw that as a comparison to the issue on topic is one of the biggest
 exaggerations I've seen in a while, and you're not usually prone to that
 sort of thing.
Actually a lot of measurements (post statistics, downloads) and plenty of evidence (D-related posts on reddit) support that hypothesis. The transition was a shock of much higher magnitude than both Walter and I anticipated. Andrei
Mar 12 2014
parent reply Manu <turkeyman gmail.com> writes:
On 13 March 2014 14:52, Andrei Alexandrescu
<SeeWebsiteForEmail erdani.org>wrote:

 On 3/12/14, 8:35 PM, Manu wrote:

 On 13 March 2014 10:15, Walter Bright <newshound2 digitalmars.com
 <mailto:newshound2 digitalmars.com>> wrote:

     On 3/12/2014 5:02 PM, Chris Williams wrote:

         As someone who would like to be able to use D as a language,
         professionally,
         it's more important to me that D gain future clients than that
         it maintains the
         ones that it has. Even more important is that it does both of
         those things.


     The D1 -> D2 transition very nearly destroyed D by sacrificing all
     the momentum it had.


 To draw that as a comparison to the issue on topic is one of the biggest
 exaggerations I've seen in a while, and you're not usually prone to that
 sort of thing.
Actually a lot of measurements (post statistics, downloads) and plenty of evidence (D-related posts on reddit) support that hypothesis. The transition was a shock of much higher magnitude than both Walter and I anticipated.
You're seriously comparing a deprecation warning telling you to write 'virtual' infront of virtuals to the migration from D1 to D2?
Mar 12 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/12/2014 10:15 PM, Manu wrote:
 You're seriously comparing a deprecation warning telling you to write 'virtual'
 infront of virtuals to the migration from D1 to D2?
You're talking about changing practically every D class in existence. (The D1 => D2 transition also came with plenty of compiler warnings help.)
Mar 12 2014
parent Manu <turkeyman gmail.com> writes:
On 13 March 2014 15:55, Walter Bright <newshound2 digitalmars.com> wrote:

 On 3/12/2014 10:15 PM, Manu wrote:

 You're seriously comparing a deprecation warning telling you to write
 'virtual'
 infront of virtuals to the migration from D1 to D2?
You're talking about changing practically every D class in existence.
Only base classes, and only functions that are actually overridden. I don't want to raise 'override' again, but we know precisely the magnitude by which this change would affect users; significantly less than that.

 (The D1 => D2 transition also came with plenty of compiler warnings help.)

But the scale of changes required was extremely drastic, and non-trivial in
aggregate. That's not comparable to "put this word here", which is what
we're talking about.
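Manu's claim about the scope of the change can be illustrated with a small D sketch (the class names are hypothetical, not from the thread). Under final-by-default, only base-class functions that are actually overridden somewhere would need an annotation; everything else compiles unchanged:

```d
class Shape
{
    void area() { }   // overridden below: would need `virtual` (one word added)
    void name() { }   // never overridden: simply becomes final, no edit needed
}

class Circle : Shape
{
    override void area() { }  // overriding leaves already carry `override`
}
```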
Mar 12 2014
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/12/14, 5:02 PM, Chris Williams wrote:
 On Wednesday, 12 March 2014 at 22:50:00 UTC, Walter Bright wrote:
 But we nearly lost a major client over it.

 We're past the point where we can break everyone's code.
As someone who would like to be able to use D as a language, professionally, it's more important to me that D gain future clients than that it maintains the ones that it has. Even more important is that it does both of those things.
The saying goes, "you can't make a bucket of yogurt without a spoonful of rennet". The pattern of breaking customer code with each new version must end. It's the one thing that both current and future users want: a pattern of stability and reliability.
 I like the idea of having final: and !final:, but I think that compiler
 flags are the right answer for how to approach defaulting final.
Sorry, no. We are opposed to having compiler flags define language semantics. Andrei
Mar 12 2014
next sibling parent reply "Chris Williams" <yoreanon-chrisw yahoo.co.jp> writes:
On Thursday, 13 March 2014 at 00:18:06 UTC, Andrei Alexandrescu 
wrote:
 Sorry, no. We are opposed to having compiler flags define 
 language semantics.
If done excessively, I could certainly see that. But outside of new languages that haven't gotten to that point yet, I don't know of any that don't have compiler/runtime flags of this sort, e.g. Java, Perl, C, C++, PHP, etc. I would be curious why you think D can escape this fate? The only alternatives are:

1. Adding new syntax for things that are effectively the same (e.g. typedef vs the Phobos typedef) until the language definition is so long and full of so many variants that code by different people is mutually unintelligible, depending on when each person started learning the language, and the language starts to look like Perl with all the various symbols used to denote every other thing.

2. Deciding the language is perfect, regardless of whether it has ever reached a state that draws in clients.
Mar 12 2014
parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 3/12/2014 8:29 PM, Chris Williams wrote:
 If done excessively, I could certainly see that. But outside of new
 languages that haven't gotten to that point yet, I don't know of any
 that don't have compiler/runtime flags of this sort. E.g. Java, Perl, C,
 C++, PHP, etc. I would be curious why you think D can escape this fate?
PHP is a perfect example of why language-altering flags is a very bad path to start heading down. (Granted, the problem is *vastly* worse in an interpreted language than a compiled one, but still.)
Mar 12 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/12/2014 11:31 PM, Nick Sabalausky wrote:
 PHP is a perfect example of why language-altering flags is a very bad path to
 start heading down. (Granted, the problem is *vastly* worse in an interpreted
 language than a compiled one, but still.)
There are examples with C and C++ compilers, too. My most unfavorite is the one that sets the signedness of 'char'.
Mar 13 2014
prev sibling next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Thursday, 13 March 2014 at 00:18:06 UTC, Andrei Alexandrescu 
wrote:
 On 3/12/14, 5:02 PM, Chris Williams wrote:
 As someone who would like to be able to use D as a language,
 professionally, it's more important to me that D gain future 
 clients
 than that it maintains the ones that it has. Even more 
 important is that
 it does both of those things.
The saying goes, "you can't make a bucket of yogurt without a spoonful of rennet". The pattern of resetting customer code into the next version must end. It's the one thing that both current and future users want: a pattern of stability and reliability.
Doesn't this sort of seal the language's fate in the long run, though? Eventually, new programming languages will appear which will learn from D's mistakes, and no new projects will be written in D. Wasn't it here that I heard that a language which doesn't evolve is a dead language?

From looking at the atmosphere in this newsgroup, at least to me it appears obvious that there are, in fact, D users who would be glad to have their D code broken if it means that it will end up being written in a better programming language. (I'm one of them, for the record; I regularly break my own code anyway when refactoring my library.) Although I'm not advocating forking off a D3 here and now, the list of things we wish we could fix is only going to grow.
Mar 12 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/12/2014 5:40 PM, Vladimir Panteleev wrote:
 Doesn't this sort of seal the language's fate in the long run, though?
 Eventually, new programming languages will appear which will learn from D's
 mistakes, and no new projects will be written in D.

 Wasn't it here that I heard that a language which doesn't evolve is a dead
 language?

  From looking at the atmosphere in this newsgroup, at least to me it appears
 obvious that there are, in fact, D users who would be glad to have their D code
 broken if it means that it will end up being written in a better programming
 language. (I'm one of them, for the record; I regularly break my own code
anyway
 when refactoring my library.) Although I'm not advocating forking off a D3 here
 and now, the list of things we wish we could fix is only going to grow.
There are costs and benefits:

Benefits:

1. attracting new people with better features

Costs:

2. losing existing users by breaking their code, losing new people because of a reputation for instability

There aren't clearcut answers. It's a judgement call. Final-by-default has very large breakage costs, and its improvement is minor and achievable by other means.
Mar 12 2014
next sibling parent reply Manu <turkeyman gmail.com> writes:
On 13 March 2014 11:13, Walter Bright <newshound2 digitalmars.com> wrote:

 On 3/12/2014 5:40 PM, Vladimir Panteleev wrote:

 Doesn't this sort of seal the language's fate in the long run, though?
 Eventually, new programming languages will appear which will learn from
 D's
 mistakes, and no new projects will be written in D.

 Wasn't it here that I heard that a language which doesn't evolve is a dead
 language?

  From looking at the atmosphere in this newsgroup, at least to me it
 appears
 obvious that there are, in fact, D users who would be glad to have their
 D code
 broken if it means that it will end up being written in a better
 programming
 language. (I'm one of them, for the record; I regularly break my own code
 anyway
 when refactoring my library.) Although I'm not advocating forking off a
 D3 here
 and now, the list of things we wish we could fix is only going to grow.
There are costs and benefits:

Benefits:

1. attracting new people with better features

Costs:

2. losing existing users by breaking their code, losing new people because of a reputation for instability

There aren't clearcut answers. It's a judgement call. The final-by-default has very large breakage costs, and its improvement is minor and achievable by other means.
It's not minor, and it's not achievable by other means though. It's also not a very large breaking change. In relative terms, it's already quantified: it's expected to be much smaller than override was, since this only affects bases, while override affected all branches and leaves.

You and Andrei are the only resistance in this thread so far. Why don't you ask the 'temperamental client' what their opinion is? Give them a heads up; perhaps they'll be more reasonable than you anticipate? Both myself and Don have stated on behalf of industrial clients that we embrace breaking changes that move the language forward, or correct clearly identifiable mistakes.

One of the key advantages of D over other languages, in my mind, is precisely its fluidity. The idea that this is a weakness doesn't resonate with me in any way (assuming that it's managed in a sensible/predictable/well-communicated manner). I *like* fixing my code when a breaking change fixes something that was unsatisfactory, and it seems that most others present feel this way too. I've used C++ for a long time; I know very well how much I hate carrying language baggage to the end of my years.

That said, obviously there's a big difference between random breakage and controlled deprecation. The two need to stop being conflated. I don't think they are the same thing, and they can't reasonably be compared. I've never heard anybody object to the latter if it's called for. The std.json example you raise is a clear example of the former.
Mar 12 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/12/2014 9:23 PM, Manu wrote:
 It's not minor, and it's not achievable by other means though.
class C { final: ... } does it.
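As a hedged sketch (illustrative class and member names, not from the thread), the label pattern Walter refers to looks like this in D, and it also shows the gap the proposed `!final` would fill:

```d
class C
{
final:  // every member declared after this label is final (non-virtual)
    void foo() { }
    void bar() { }

    // Today there is no way to switch back to virtual below the label;
    // the proposed `!final` (or a dedicated `virtual` keyword) would
    // re-enable it for a single function:
    // !final void onEvent() { }
}
```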
 You and Andrei are the only resistance in this thread so far. Why don't you ask
 'temperamental client' what their opinion is? Give them a heads up, perhaps
 they'll be more reasonable than you anticipate?
I didn't even know about this client before the breakage. D has a lot of users who we don't know about.
 Both myself and Don have stated on behalf of industrial clients that we embrace
 breaking changes that move the language forward, or correct clearly
identifiable
 mistakes.
Breaking changes has been a huge barrier to Don's company being able to move from D1 to D2. I still support D1 specifically for Don's company.
Mar 12 2014
next sibling parent "Dicebot" <public dicebot.lv> writes:
On Thursday, 13 March 2014 at 06:02:27 UTC, Walter Bright wrote:
 Both myself and Don have stated on behalf of industrial 
 clients that we embrace
 breaking changes that move the language forward, or correct 
 clearly identifiable
 mistakes.
Breaking changes has been a huge barrier to Don's company being able to move from D1 to D2. I still support D1 specifically for Don's company.
Which has resulted in an awkward situation where some D2 features are really wanted but doing the transition "at once" is too much of an effort to be managed easily. D1 -> D2 had one huge breaking change that made most of the difference: the const qualifier system. This change did not have an obvious migration path, was not backed by compiler help, and had a lot of not-so-obvious consequences for language semantics. There is no harm in doing small breaking changes if a good transition path is provided.

I personally believe that a far more damaging factor is that users often expect DMD versions to be minor releases, while they are in fact always major releases (which implies some possibility of breakage by definition).

Also, while C/C++ itself is pretty stable, real compilers are not. I can't imagine anyone suddenly upgrading their gcc version on a production scale and expecting stuff to "just work". So it is the wrong example to refer to.
Mar 13 2014
prev sibling next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 3/13/14, Walter Bright <newshound2 digitalmars.com> wrote:
 I didn't even know about this client before the breakage.
I'm really getting tired of this argument. An unknown client (which you still haven't named, so as far as I'm concerned it might as well be just a reddit troll) comes out of the blue, complains about some small breakage which can *easily* be fixed in a point release, and suddenly that has to affect the decision on final by default.

Also, the client hasn't bothered to file a bug report, and 2.065 has been released for a few weeks (never mind the massively long beta cycle). Why not do the obvious and just roll out a point release with the std.json fixes?

I only see this getting worse, however. I mean the whole idea of client X deciding to ring up Andrei or Walter, with NDAs not to disclose their name, and making an executive decision on some language/phobos feature. Meanwhile, who's fixing the bugs and implementing features? People who are not on a payroll. So I think we the community and the developers have a right to a vote.
Mar 13 2014
next sibling parent reply "Peter Alexander" <peter.alexander.au gmail.com> writes:
On Thursday, 13 March 2014 at 08:16:50 UTC, Andrej Mitrovic wrote:
 On 3/13/14, Walter Bright <newshound2 digitalmars.com> wrote:
 I didn't even know about this client before the breakage.
I'm really getting tired of this argument. An unknown client (which you still haven't named, so as far as I'm concerned it might as well be just a reddit troll) comes out of the blue, complains about some small breakage which can *easily* be fixed in a point release, and suddenly that has to affect the decision on final by default. Also, the client hasn't bothered to file a bug report, and 2.056 has been released for a few weeks (nevermind the massively long beta cycle). Why not do the obvious and just roll out the point release with the std.json fixes? I only see this as getting worse, however. I mean the whole idea of client X deciding to ring up Andrei or Walter, NDAs to not disclose their name, and make an executive decision on some language/phobos feature. Meanwhile, who's fixing the bugs and implementing features? People who are not on a payroll. So I think we the community and the developers have a right for a vote.
To be honest, whether or not the client really exists is irrelevant. We can't just keep making large breaking changes.

It's not just big companies that are affected either. Every breaking change potentially breaks some open source library. If that library is no longer maintained then it just stops working, and no one knows until a user comes along and tries to compile it. When it fails to compile, most users will just assume it doesn't work and move on. If that library was critical to their project then we probably lose a user.

As for the release time and beta: most people aren't on the forums daily. They don't know this is happening. The people on this forum are not representative D users. I occasionally run Python scripts at work. I can assure you I have absolutely no idea when Python is going to get an update, and I certainly have no idea when beta test periods are being run!
Mar 13 2014
next sibling parent "Dicebot" <public dicebot.lv> writes:
On Thursday, 13 March 2014 at 08:47:13 UTC, Peter Alexander wrote:
 To be honest, whether or not the client really exists is 
 irrelevant. We can't just keep making large breaking changes.

 It's not just big companies that are effected either. Every 
 breaking change potentially breaks some open source library. If 
 that library is no longer maintained then it just stops 
 working, and no one knows until a user comes along and tries to 
 compile it. When it fails to compile, most users will just 
 assume it doesn't work and move on. If that library was 
 critical to their project then we probably lose a user.
It does effectively mean that we must stop development right here and now and never commit anything to the dlang repos anymore, because most breakage comes from bug fixes for accepts-invalid issues. And that still renders unmaintained libraries unusable. Is that what you want?
 As for the release time and beta: most people aren't on the 
 forums daily. They don't know this is happening. The people on 
 this forum are not representative D users.

 I occasionally run Python scripts at work. I can assure you I 
 have absolutely no idea when Python is going to get an update 
 and I certainly have no idea when beta tests periods are being 
 run!
Python scripts sometimes break for me when changing versions, between 2.4 and 2.7 for example. Any change to the default Python version is usually announced as a major update.
Mar 13 2014
prev sibling next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 3/13/14, Peter Alexander <peter.alexander.au gmail.com> wrote:
 If that library is no longer maintained then it just stops working
Which is a good thing. Why would you use an unmaintained library? There is no such thing as a perfect library that has all of its bugs fixed. No maintenance => dead project.
 As for the release time and beta: most people aren't on the
 forums daily. They don't know this is happening. The people on
 this forum are not representative D users.
Those who wish to represent themselves should represent themselves. You want to represent the D clients who:

- Won't file bugs
- Won't disclose their names
- Won't disclose that they're using D
- Won't contribute to D

Exactly what makes them a worthwhile client to keep? Is it just in order to be able to privately brag about client X using your technology Z? I really don't understand how D "wins" with these ghost clients that do not represent themselves.
Mar 13 2014
parent "bearophile" <bearophileHUGS lycos.com> writes:
Andrej Mitrovic:

 You want to represent the D clients who:

 - Won't file bugs
 - Won't disclose their names
 - Won't disclose that they're using D
 - Won't contribute to D

 Exactly what makes them a worthwhile client to keep?
They grow the community of D users, who eventually could buy D-related products like IDEs, plug-ins, lints, etc. They could buy Andrei's book and give a little money to Andrei. They could create D apps that sometimes become known and give some more visibility to D. And generally D is an open source tool, so you create it for people, often people that don't contribute back. So it's right to respect such people and "clients". But all this can't justify stopping carefully planned changes to D.

Bye,
bearophile
Mar 13 2014
prev sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 3/13/14, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:
 Those who wish to represent themselves should represent themselves.
Another important point to make: if client A says a specific changeset was bad, what gives client A priority over client B who says the same changeset is good? These clients need to communicate directly with the community and the developers via the forums and by filing bug/enhancement reports.

I believe clients (why are we calling them "clients" anyway?) should have to provide reasonable arguments for preferring one feature over another, beyond the simple "we're using D therefore we're important" argument.
Mar 13 2014
next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Thursday, 13 March 2014 at 10:19:09 UTC, Andrej Mitrovic wrote:
 On 3/13/14, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:
 Those who wish to represent themselves should represent 
 themselves.
Another important point to make: If client A says a specific changeset was bad, what gives client A priority over client B who says the same changeset is good? These clients need to communicate directly with the community and the developers via the forums and by filing bugs/enhancement reports. I believe clients (why are we calling them "clients" anyway?) should have to provide reasonable arguments for preferring one feature over another, beyond the simple "we're using D therefore we're important" argument.
What if the changes become a hassle, forcing those people to reconsider their decision to use D?

For example, I do talk a lot here, because not only do I like D and its community, but I am also a language geek at heart. In my day job there is no chance I could ever use D, as we only do JVM/.NET/Mobile consultancy; the last C++ boat sailed around 2006. Sometimes I do advocate D, though, but those people won't jump into a language they cannot consider stable enough.

--
Paulo
Mar 13 2014
prev sibling parent reply "Daniele Vian" <dontreply example.com> writes:
On Thursday, 13 March 2014 at 10:19:09 UTC, Andrej Mitrovic wrote:
 Another important point to make: If client A says a specific 
 changeset
 was bad, what gives client A priority over client B who says 
 the same
 changeset is good? These clients need to  communicate directly 
 with
 the community and the developers via the forums and by filing
 bugs/enhancement reports.

 I believe clients (why are we calling them "clients" anyway?) 
 should
 have to provide reasonable arguments for preferring one feature 
 over
 another, beyond the simple "we're using D therefore we're 
 important"
 argument.
Just to be clear, I never had a "preferred" implementation, nor would I ever "lobby" for one, as it's not in the best interest of anybody. Consider me a fan of breaking the code if it improves features, security and whatnot. The only thing I'm asking for is for it to be transparent: a point in the changelog, and keeping the old behavior as deprecated for a reasonable amount of time, that's all.

A bug was not filed as it didn't look like a bug but a design choice (undocumented). If reporting it directly to Walter Bright was the wrong thing to do, my apologies, but we surely didn't have any hidden "agenda" here.

Thanks
Daniele
Mar 13 2014
next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 3/13/14, Daniele Vian <dontreply example.com> wrote:
 The only thing I'm asking for is for it to be transparent: a
 point in the changelog and keeping the old one as deprecated for
 a reasonable amount of time, that's all.
Yeah, that's what we're always trying to do. If something broke but wasn't mentioned in the documentation, please do file a bug. Thanks.
 If reporting it directly to Walter Bright was the wrong thing to
 do, my apologies, but we surely didn't have any hidden "agenda"
 here.
Walter made it sound as if you were willing to stop using D because of a single regression.
Mar 13 2014
prev sibling parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Daniele Vian"  wrote in message 
news:qggzguvbwtpaipdmnsgq forum.dlang.org...

 Just to be clear, I never had a "preferred" implementation, nor I would 
 ever "lobby" for one as it's not in the best interest of anybody.
 Consider me a fan of breaking the code if it improves features, security 
 and what not.
 The only thing I'm asking for is for it to be transparent: a point in the 
 changelog and keeping the old one as deprecated for a reasonable amount of 
 time, that's all.
I think we can all agree on that.
 A bug was not filed as it didn't look like a bug but a design choice 
 (undocumented).
If it breaks your code without warning, it's usually worth filing a bug. Bugzilla is the best place to report possible regressions, as all the relevant people are paying attention. Worst case it gets closed with an explanation of why it was changed.
 If reporting it directly to Walter Bright was the wrong thing to do, my 
 apologies, but we surely didn't have any hidden "agenda" here.
There is nothing wrong with reporting directly to Walter, the problem is when Walter/Andrei then make decisions based on this information privately. Walter has been wrong about these things in the past, and without knowing who the third party is it's impossible to get clarification.
Mar 13 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/13/14, 6:15 AM, Daniel Murphy wrote:
 There is nothing wrong with reporting directly to Walter, the problem is
 when Walter/Andrei then make decisions based on this information
 privately. Walter has been wrong about these things in the past, and
 without knowing who the third party is it's impossible to get
 clarification.
Just for the record that was given as an example, not a motivator. After much deliberation, we believe we are right in our decision to not make final the default. Andrei
Mar 13 2014
parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Andrei Alexandrescu"  wrote in message 
news:lfsja8$1rom$1 digitalmars.com...

 After much deliberation, we believe we are right in our decision to not 
 make final the default.
If I've absorbed the information correctly, you think the change is good but the breakage would be too large. This thread has had people from several 'industry' D users stating that they do not have a problem with well planned breaking changes, and I'm not sure why you feel differently about this. Why are you so sure this change is too much breakage to be acceptable? Where exactly is the line between 'worth it' and 'too big'?
Mar 13 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/13/14, 9:23 AM, Daniel Murphy wrote:
 "Andrei Alexandrescu"  wrote in message
 news:lfsja8$1rom$1 digitalmars.com...

 After much deliberation, we believe we are right in our decision to
 not make final the default.
If I've absorbed the information correctly, you think the change is good but the breakage would be too large.
That is correct. If I did things over again, I'd probably favor defaulting to final. However, it is my belief that this particular design choice does not make nearly as dramatic a difference as some people seem to believe.
 This thread has had people from several 'industry' D users stating that
 they do not have a problem with well planned breaking changes, and I'm
 not sure why you feel differently about this.
I have seen those messages as well. I have argued at length why I feel differently about this, including how I see the numbers working in this forum. As far as I can tell you are not convinced, so repeating those arguments would not help.
 Why are you so sure this change is too much breakage to be acceptable?
I don't know how to answer this. Again, I can only assume that whatever justification I reiterate it's unlikely to help any.
 Where exactly is the line between 'worth it' and 'too big'?
I know a guy who knows exactly where it is. He's a liar :o). This is clearly a rhetorical question, but in this case I believe the change is not worth it. Andrei
Mar 13 2014
parent reply "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Thursday, 13 March 2014 at 16:41:32 UTC, Andrei Alexandrescu 
wrote:
 On 3/13/14, 9:23 AM, Daniel Murphy wrote:

 This thread has had people from several 'industry' D users 
 stating that
 they do not have a problem with well planned breaking changes, 
 and I'm
 not sure why you feel differently about this.
I have seen those messages as well. I have argued at length why I feel differently about this, including how I see the numbers working in this forum. As far as I can tell you are not convinced, so repeating those arguments would not help.
I think that this is a bit unfair. Just to be clear, we have committed a lot of money and effort to the D programming language, "smelling" years ago that it could be a competitive advantage over other choices. That said, I'm following the forum, as this is by far the best way to reinforce or undermine my past decision (and sleep well at night!). That's why I think that, IMHO, companies that adopted D "seriously" are present here, and are lurking. Just to give some perspective, we are not as big as Sociomantic but we are making some $M, so for us the decision was not a joke. And to be honest, what's really scary is not the frequency of the "planned improvements" to the language, but that a feeling turned "a solid piece of evidence" [1] into smoke. Today it's virtual, tomorrow who knows? That's my feedback for the community, and for the two leaders. - Paolo [1] http://forum.dlang.org/thread/yzsqwejxqlnzryhrkfuq forum.dlang.org?page=23#post-koo65g:241nqs:242:40digitalmars.com
Mar 13 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/13/14, 10:21 AM, Paolo Invernizzi wrote:
 On Thursday, 13 March 2014 at 16:41:32 UTC, Andrei Alexandrescu wrote:
 On 3/13/14, 9:23 AM, Daniel Murphy wrote:

 This thread has had people from several 'industry' D users stating that
 they do not have a problem with well planned breaking changes, and I'm
 not sure why you feel differently about this.
I have seen those messages as well. I have argued at length why I feel differently about this, including how I see the numbers working in this forum. As far as I can tell you are not convinced, so repeating those arguments would not help.
I think that this is a bit unfair. Just to be clear, we have committed a lot of money and effort to the D programming language, "smelling" years ago that it could be a competitive advantage over other choices.
I think that holds regardless of the decision in this particular matter.
 Told that, I'm following the forum as this is by far the best way to
 reinforce of undermine my past decision (and sleep well at night!)
 That why I think that, IMHO, companies that adopted D "seriously" are
 present here, and are lurking.
I don't think so. This isn't the case for my employer, and hasn't been the case historically for a lot of the companies using other languages. I have plenty of experience with forum dynamics to draw from.
 Just to give a perspective, we are not so big like Sociomantic but we
 are making some $M, so for us the decision was not a joke.
Again, it's unlikely the decision would have been in other way affected by a minor language design detail. The matter is you seem convinced final would improve your use of D, and therefore are unhappy with the decision. For those who aren't, we'd seem flaky by breaking their code.
 And to be honest, what's really scary is not the frequency of the
 "planned improvements" to the language, but that a feeling turned "a
 solid piece of evidence" [1] into smoke. Today it's virtual, tomorrow who
 knows?

 That's my feedback for the community, and for the two leaders.

 - Paolo

 [1]
 http://forum.dlang.org/thread/yzsqwejxqlnzryhrkfuq forum.dlang.org?page=23#post-koo65g:241nqs:242:40digitalmars.com
Now I think you're being unfair. Yes, it was a good piece of evidence. And yes, it turned out to be not enough. It's that simple and that visible. What, are Walter and I doing cover-up machinations now??? There must be a way to convey that a decision has been made. It is understood it won't please everybody, just like going the other way won't please everybody. Please let me know what that way is. Thanks, Andrei
Mar 13 2014
next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 3/13/14, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:
 There must be a way to convey that a decision has been made. It is
 understood it won't please everybody, just like going the other way
 won't please everybody. Please let me know what that way is.
Voting.
Mar 13 2014
next sibling parent reply "Steve Teale" <steve.teale britseyeview.com> writes:
On Thursday, 13 March 2014 at 18:03:42 UTC, Andrej Mitrovic wrote:
 On 3/13/14, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> 
 wrote:
 There must be a way to convey that a decision has been made. 
 It is
 understood it won't please everybody, just like going the 
 other way
 won't please everybody. Please let me know what that way is.
Voting.
I recall someone telling me already that this is not a democracy. Even if we had it I think some sane person would have to make a choice on a 51/49 decision to change something. It would have to be 75% or whatever. Who is to write the constitution? Steve
Mar 13 2014
parent "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 13 March 2014 at 18:13:29 UTC, Steve Teale wrote:
 On Thursday, 13 March 2014 at 18:03:42 UTC, Andrej Mitrovic 
 wrote:
 On 3/13/14, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> wrote:
 There must be a way to convey that a decision has been made. 
 It is
 understood it won't please everybody, just like going the 
 other way
 won't please everybody. Please let me know what that way is.
Voting.
I recall someone telling me already that this is not a democracy. Even if we had it I think some sane person would have to make a choice on a 51/49 decision to change something. It would have to be 75% or whatever. Who is to write the constitution? Steve
Technically, this is a democracy. Nobody is forcing anyone to do or use anything. Anyone can do a modified version of D and whoever likes it uses it. This is true democracy. I think you are confusing democracy and dictatorship of the majority.
Mar 13 2014
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/13/14, 11:03 AM, Andrej Mitrovic wrote:
 On 3/13/14, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:
 There must be a way to convey that a decision has been made. It is
 understood it won't please everybody, just like going the other way
 won't please everybody. Please let me know what that way is.
Voting.
Voting is for making a decision, not for conveying it. Voting in programming design matters has two issues: 1. Representation: the set of people on the D forum is too informal a representative. International standardization bodies do use voting, but their framework is considerably more formal (not to mention they have their own liabilities). 2. There's the danger of getting into a design-by-committee rut. Andrei
Mar 13 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/13/2014 11:41 AM, Andrei Alexandrescu wrote:
 2. There's the danger of getting into a design-by-committee rut.
Back in the early days of the C++ Standards committee, I recall some members negotiating in effect "vote for my proposal and I'll vote for yours". I don't see that as a great way to design a language. Democratic committee processes also involve long, and I mean loooong, timespans for making decisions. Like 13 years from C++98 to C++11.
Mar 13 2014
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Mar 13, 2014 at 02:50:08PM -0700, Walter Bright wrote:
 On 3/13/2014 11:41 AM, Andrei Alexandrescu wrote:
2. There's the danger of getting into a design-by-committee rut.
Back in the early days of the C++ Standards committee, I recall some members negotiating in effect "vote for my proposal and I'll vote for yours". I don't see that as a great way to design a language. Democratic committee processes also involve long, and I mean loooong, timespans for making decisions. Like 13 years from C++98 to C++11.
Democratic processes only work well if the least unpopular choice equals the optimal choice. When this is not the case, it consistently produces suboptimal results. (I say "least unpopular", because it usually turns out that people in a democratic system simply cannot come to an agreement, so the only way forward is to find a choice that displeases everyone the least. This leads to disappointing results when it comes to technical design.) T -- "I'm not childish; I'm just in touch with the child within!" - RL
Mar 13 2014
prev sibling parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Thursday, 13 March 2014 at 21:50:10 UTC, Walter Bright wrote:
 On 3/13/2014 11:41 AM, Andrei Alexandrescu wrote:
 2. There's the danger of getting into a design-by-committee 
 rut.
Back in the early days of the C++ Standards committee, I recall some members negotiating in effect "vote for my proposal and I'll vote for yours". I don't see that as a great way to design a language. Democratic committee processes also involve long, and I mean loooong, timespans for making decisions. Like 13 years from C++98 to C++11.
To be pedantic, there was a TC released in 2003. And many of the C++11 features were available years ahead of time from all the usual library sources. But I agree that the ISO process isn't fast. In fact, I think that's an actual goal, as the industries that depend on these standards are slow-moving as well.
Mar 13 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/13/2014 4:10 PM, Sean Kelly wrote:
 To be pedantic, there was a TC released in 2003.  And many of the
 C++11 features were available years ahead of time from all the
 usual library sources.  But I agree that the ISO process isn't
 fast.  In fact, I think that's an actual goal, as the industries
 that depend on these standards are slow-moving as well.
Of course. Slow is not always a bad thing.
Mar 13 2014
prev sibling parent "Abdulhaq" <alynch4047 gmail.com> writes:
On Thursday, 13 March 2014 at 18:03:42 UTC, Andrej Mitrovic wrote:
 On 3/13/14, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> 
 wrote:
 There must be a way to convey that a decision has been made. 
 It is
 understood it won't please everybody, just like going the 
 other way
 won't please everybody. Please let me know what that way is.
Voting.
Good programming languages have a coherent design, orthogonal features, a clean philosophy, approach... they subscribe to chosen programming models such as procedural, functional, message-passing, garbage collected, dynamically typed, strongly typed.... as the designer intended. Having a 'they who shout the loudest win' or even a voting system destroys that coherency and uniform philosophy IMHO. I don't bother with C++ because I read Herb Sutter's GOTW column from time to time and think to myself "I don't want to need to know all these very subtle issues and gotchas in my programming language". The syntax etc. of C++ I can cope with. Now, I think that Andrei and Walter are D's best chance to shepherd D away from the C++ gotcha morass. Voting on features to change / add would throw D into it. In my view adding features is particularly pernicious as they complicate the language in non-orthogonal ways, leading to the need to know a broader and more complex language than necessary.
Mar 13 2014
prev sibling parent reply "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Thursday, 13 March 2014 at 17:56:09 UTC, Andrei Alexandrescu 
wrote:
 On 3/13/14, 10:21 AM, Paolo Invernizzi wrote:

 Told that, I'm following the forum as this is by far the best 
 way to
 reinforce of undermine my past decision (and sleep well at 
 night!)
 That why I think that, IMHO, companies that adopted D 
 "seriously" are
 present here, and are lurking.
I don't think so. This isn't the case for my employer, and hasn't been the case historically for a lot of the companies using other languages. I have plenty of experience with forum dynamics to draw from.
So we disagree on that, and that's fine by me, but this doesn't change the fact that your presence here means your employer, Facebook, is well represented in the forum now that it has something committed to D, IMHO...
 Just to give a perspective, we are not so big like Sociomantic 
 but we
 are making some $M, so for us the decision was not a joke.
Again, it's unlikely the decision would have been in other way affected by a minor language design detail. The matter is you seem convinced final would improve your use of D, and therefore are unhappy with the decision. For those who aren't, we'd seem flaky by breaking their code.
As I've stated, it is not about the single decision; I don't care about final vs virtual in our code. It's about the whole way that "planned improvement" changes to the language are managed.
 And to be honest what it's really scaring it's not the 
 frequency of the
 "planned improvement" of the language, but that a feeling 
 turned  "a
 solid piece of evidence" [1] into smoke. Today is virtual, 
 tomorrow how
 knows?

 [1]
 http://forum.dlang.org/thread/yzsqwejxqlnzryhrkfuq forum.dlang.org?page=23#post-koo65g:241nqs:242:40digitalmars.com
Now I think you're being unfair. Yes, it was a good piece of evidence. And yes, it turned out to be not enough. It's that simple and that visible. What, are Walter and me doing cover-up machinations now???
I'm not a native English speaker, but it doesn't seem to me that the meaning of what I wrote was that D is driven by machinations. What I meant is: why was the past mega-thread about virtual vs final (which I don't care about!), which seemed (to me!) to set a concrete direction, scrapped (to me!) like a thunderbolt from a clear sky? Where's the discussion of why "it turned out to be not enough"? What scares me (as a company using the language) is that I wasn't able to "grasp" that fact in the forum till now. So, that could also happen to *other* aspects of the language that I care about for my business, without even having the ability to discuss the motivation of a decision.
 There must be a way to convey that a decision has been made. It 
 is understood it won't please everybody, just like going the 
 other way won't please everybody. Please let me know what that 
 way is.
Again, the whole point was that it seemed to me that a decision was taken in that famous thread. My feedback, take it as you want Andrei, is that such behaviours are way more scary than the whole business of managing a "planned" (again!) language change. Thanks, - Paolo
Mar 13 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/13/14, 11:37 AM, Paolo Invernizzi wrote:
 As I've stated, it is not about the single decision, I don't care about
 final vs virtual in our code. it's about the whole way that "planned
 improvement" changes to the language are managed.
Got it, thanks.
 What I meant is: why was the past mega-thread about virtual vs final
 (which I don't care about!), which seemed (to me!) to set a concrete
 direction, scrapped (to me!) like a thunderbolt from a clear sky?

 Where's the discussion of why "it turned out to be not enough"?

 What scares me (as a company using the language) is that I wasn't able
 to "grasp" that fact in the forum till now.

 So, that could also happen to *other* aspects of the language that I care
 about for my business, without even having the ability to discuss the
 motivation of a decision.

 There must be a way to convey that a decision has been made. It is
 understood it won't please everybody, just like going the other way
 won't please everybody. Please let me know what that way is.
Again, the whole point was that it seemed to me that a decision was taken in that famous thread. My feedback, take it as you want Andrei, is that such behaviours are way more scary than the whole business of managing a "planned" (again!) language change.
Understood. That's one angle. The other angle is that a small but vocal faction can intimidate the language leadership to effect a large breaking change that it doesn't believe in. Also let's not forget that a bunch of people will have not had contact with the group and will not have read the respective thread. For them -- happy campers who get work done in D day in and day out, feeling no speed impact whatsoever from a virtual vs. final decision -- we are simply exercising the brunt of a deprecation cycle with undeniable costs and questionable (in Walter's and my opinion) benefits. Andrei
Mar 13 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/13/2014 1:09 PM, Andrei Alexandrescu wrote:
 Also let's not forget that a bunch of people will have not had contact with the
 group and will not have read the respective thread. For them -- happy campers
 who get work done in D day in and day out, feeling no speed impact whatsoever
 from a virtual vs. final decision -- we are simply exercising the brunt of a
 deprecation cycle with undeniable costs and questionable (in Walter's and my
 opinion) benefits.
Also, class C { final: ... } achieves final-by-default and it breaks nothing.
Mar 13 2014
next sibling parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Walter Bright"  wrote in message news:lft8ok$2epl$1 digitalmars.com...

 Also,

      class C { final: ... }

 achieves final-by-default and it breaks nothing.
No, it doesn't, because it is not usable if C introduces any virtual methods. On the other hand, class C { virtual: ... } _does_ trivially bring back virtual-by-default. The idea that having to add this to some of your classes (which the compiler identifies) over the course of (at least) a year is large and unacceptable breakage is just nonsense. E.g. in dmd's OO-heavy code, ~13% of the classes introduce a virtual method. In those 44 classes, every single non-virtual method needs to be marked as final (~700 methods).
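To make Daniel's point concrete, here is a hedged sketch in present-day D (class and method names are invented for illustration): a `final:` label works only until the class needs to introduce a virtual method, because attribute labels cannot currently be turned back off.

```d
// Today's workaround: opt into final-by-default for one class.
class Renderer
{
final:
    void flush() {}    // non-overridable
    void present() {}  // non-overridable
    // No virtual method can be declared past the `final:` label --
    // there is no way to cancel `final`, which is exactly what the
    // !final / virtual proposals in this thread are about.
}

// Under a hypothetical final-by-default regime, the reverse label
// would trivially restore today's semantics for a whole class:
// class Widget
// {
// virtual:            // hypothetical keyword, not valid D today
//     void draw() {}
// }
```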
Mar 13 2014
next sibling parent reply captaindet <2krnk gmx.net> writes:
On 2014-03-13 22:40, Daniel Murphy wrote:
 On the other hand,

 class C { virtual: ... }

 _does_ trivially bring back virtual-by-default.
is this all it takes? i mean, after switching to final-by-default, going through anybody's codebase and blindly adding virtual at the very beginning of each class definition reestablishes 100% of the old behavior? a regex s&r can probably do this. or a tiny tool. this is a step everyone can readily subscribe to. if it is that easy i don't see any grounds for a too-breaking-change argument. /det
Mar 13 2014
parent "deadalnix" <deadalnix gmail.com> writes:
On Friday, 14 March 2014 at 04:40:56 UTC, captaindet wrote:
 On 2014-03-13 22:40, Daniel Murphy wrote:
 On the other hand,

 class C { virtual: ... }

 _does_ trivially bring back virtual-by-default.
is this all it takes? i mean, after switching to final-by-default, going through anybodies codebase and blindly adding virtual at the very beginning of each class definition reestablishes to 100% old behavior? a regex s&r can probably do this. or a tiny tool. this is a step everyone can readily subscribe to. if it is that easy i don't see any grounds for a too-breaking-change argument. /det
Yes sure. Including in boilerplate generated via mixins, obviously.
Mar 13 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/13/2014 8:40 PM, Daniel Murphy wrote:
 "Walter Bright"  wrote in message news:lft8ok$2epl$1 digitalmars.com...

 Also,

      class C { final: ... }

 achieves final-by-default and it breaks nothing.
No, it doesn't, because it is not usable if C introduces any virtual methods.
That's what the !final storage class is for.
Mar 13 2014
next sibling parent reply Manu <turkeyman gmail.com> writes:
On 14 March 2014 16:20, Walter Bright <newshound2 digitalmars.com> wrote:

 On 3/13/2014 8:40 PM, Daniel Murphy wrote:

 "Walter Bright"  wrote in message news:lft8ok$2epl$1 digitalmars.com...

  Also,
      class C { final: ... }

 achieves final-by-default and it breaks nothing.
No, it doesn't, because it is not usable if C introduces any virtual methods.
That's what the !final storage class is for.
Please don't consider !final, that's like pouring lemon juice on the wound. Use virtual, it's a keyword that everybody knows and expects, and already exists in people's code that they might be migrating to D.
Mar 13 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/13/14, 11:30 PM, Manu wrote:
 On 14 March 2014 16:20, Walter Bright <newshound2 digitalmars.com
 <mailto:newshound2 digitalmars.com>> wrote:

     On 3/13/2014 8:40 PM, Daniel Murphy wrote:

         "Walter Bright"  wrote in message
         news:lft8ok$2epl$1 __digitalmars.com...

             Also,

                   class C { final: ... }

             achieves final-by-default and it breaks nothing.


         No, it doesn't, because it is not usable if C introduces any
         virtual methods.


     That's what the !final storage class is for.


 Please don't consider !final, that's like pouring lemon juice on the
 wound.
The "wound" should have nothing to do with the decision. It would be a mistake to add a keyword here because "well, we had to give them something". The converse of final doesn't deserve its own keyword. Also we do need a means to negate others, too.
 Use virtual, it's a keyword that everybody knows and expects, and
 already exists in peoples code that they might be migrating to D.
Andrei
Mar 14 2014
next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Friday, 14 March 2014 at 07:06:53 UTC, Andrei Alexandrescu 
wrote:
 Use virtual, it's a keyword that everybody knows and expects, 
 and
 already exists in peoples code that they might be migrating to 
 D.
Yes, virtual by default is pretty what come by default nowadays.
Mar 14 2014
prev sibling parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Friday, 14 March 2014 at 07:06:53 UTC, Andrei Alexandrescu 
wrote:
 Also we do need a means to negate others, too.

 Andrei
I was going to ask "how many others" are there, but I came up with this list myself. Probably incomplete: static, const/immutable (no "mutable" keyword), pure, nothrow. To a certain extent, there's also @safe/@system, in the sense that it might make sense to "negate" them, rather than "override" them with another keyword. Still, when writing something like: class A { @system nothrow: !nothrow !@system void foo(); } It really reads awful: "not nothrow"/"not @system"?
Mar 14 2014
parent Daniel =?ISO-8859-1?Q?Koz=E1k?= <kozzi11 gmail.com> writes:
monarch_dodra píše v Pá 14. 03. 2014 v 07:41 +0000:
 On Friday, 14 March 2014 at 07:06:53 UTC, Andrei Alexandrescu 
 wrote:
 Also we do need a means to negate others, too.

 Andrei
I was going to ask "how many others" are there, but I came up with this list myself. Probably incomplete: static const/immtable (no "mutable") keyword pure nothrow To a certain extent, there's also safe/ system, in the sense that it might make sense to "negate" them, rather than "override" them with another keyword. Still, when writing something like: class A { system nothrow: !nothrow ! system void foo(); } I really reads awful: "not nothrow"/"not system"?
class A { @system nothrow: @disable(nothrow, system) void foo(); } or class A { @system nothrow: @disable(all) void foo(); }
Mar 14 2014
prev sibling parent "Dicebot" <public dicebot.lv> writes:
On Friday, 14 March 2014 at 06:30:55 UTC, Manu wrote:
 Please don't consider !final, that's like pouring lemon juice 
 on the wound.
 Use virtual, it's a keyword that everybody knows and expects, 
 and already
 exists in people's code that they might be migrating to D.
!final scales much better, as it is a consistent pattern that can be used for any attribute. I like it much more than virtual. Sounds weird though. Having `virtual` and `!virtual` would have been a better option, but that is not feasible.
Mar 14 2014
prev sibling parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Walter Bright"  wrote in message news:lfu74a$8cr$1 digitalmars.com...

 No, it doesn't, because it is not usable if C introduces any virtual 
 methods.
That's what the !final storage class is for.
My mistake, I forgot you'd said you were in favor of this. Being able to 'escape' final certainly gets us most of the way there. !final is really rather hideous though.
Mar 14 2014
next sibling parent "Regan Heath" <regan netmail.co.nz> writes:
On Fri, 14 Mar 2014 11:37:07 -0000, Daniel Murphy  
<yebbliesnospam gmail.com> wrote:

 "Walter Bright"  wrote in message news:lfu74a$8cr$1 digitalmars.com...

 No, it doesn't, because it is not usable if C introduces any virtual  
 methods.
That's what the !final storage class is for.
My mistake, I forgot you'd said you were in favor of this. Being able to 'escape' final certainly gets us most of the way there. !final is really rather hideous though.
+1 eew. R -- Using Opera's revolutionary email client: http://www.opera.com/mail/
Mar 14 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/14/14, 4:37 AM, Daniel Murphy wrote:
 "Walter Bright"  wrote in message news:lfu74a$8cr$1 digitalmars.com...

 No, it doesn't, because it is not usable if C introduces any virtual
 methods.
That's what the !final storage class is for.
My mistake, I forgot you'd said you were in favor of this. Being able to 'escape' final certainly gets us most of the way there. !final is really rather hideous though.
A few possibilities discussed around here: !final ~final final(false) @disable final I've had an epiphany literally a few seconds ago that "final(false)" has the advantage of being generalizable to "final(bool)" taking any CTFE-able Boolean. On occasion I needed a computed qualifier (I think there's code in Phobos like that) and the only way I could do it was through ugly code duplication or odd mixin-generated code. Allowing computed qualifiers/attributes would be a very elegant and general approach, and plays beautifully into the strength of D and our current investment in Boolean compile-time predicates. Andrei
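Andrei's "computed qualifier" point can be illustrated with a hedged sketch of the duplication current D forces (all names here are invented): `static if` at declaration scope works, but each branch must repeat the whole declaration, which is the boilerplate a `final(bool)` form would eliminate.

```d
// Hypothetical compile-time predicate: keep methods overridable
// only in instrumented builds, say.
enum keepOverridable = false;

class Logger
{
    // Current D: the declaration is written twice, once per branch.
    static if (keepOverridable)
        void log(string msg) { /* ... */ }       // overridable
    else
        final void log(string msg) { /* ... */ } // final

    // With the proposed final(bool), this would collapse to
    // something like (NOT valid in current D):
    //     final(!keepOverridable) void log(string msg) { /* ... */ }
}
```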
Mar 14 2014
next sibling parent "Jakob Ovrum" <jakobovrum gmail.com> writes:
On Friday, 14 March 2014 at 15:17:08 UTC, Andrei Alexandrescu 
wrote:
 I've had an epiphany literally a few seconds ago that 
 "final(false)" has the advantage of being generalizable to 
 "final(bool)" taking any CTFE-able Boolean.

 On occasion I needed a computed qualifier (I think there's code 
 in Phobos like that) and the only way I could do it was through 
 ugly code duplication or odd mixin-generated code. Allowing 
 computed qualifiers/attributes would be a very elegant and 
 general approach, and plays beautifully into the strength of D 
 and our current investment in Boolean compile-time predicates.


 Andrei
+1 for this approach. It's also another step towards perfect forwarding without using string mixin declarations.
Mar 14 2014
prev sibling next sibling parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Friday, 14 March 2014 at 15:17:08 UTC, Andrei Alexandrescu 
wrote:
 On 3/14/14, 4:37 AM, Daniel Murphy wrote:
 "Walter Bright"  wrote in message 
 news:lfu74a$8cr$1 digitalmars.com...

 No, it doesn't, because it is not usable if C introduces 
 any virtual
 methods.
That's what the !final storage class is for.
My mistake, I forgot you'd said you were in favor of this. Being able to 'escape' final certainly gets us most of the way there. !final is really rather hideous though.
A few possibilities discussed around here: !final ~final final(false) disable final I've had an epiphany literally a few seconds ago that "final(false)" has the advantage of being generalizable to "final(bool)" taking any CTFE-able Boolean. On occasion I needed a computed qualifier (I think there's code in Phobos like that) and the only way I could do it was through ugly code duplication or odd mixin-generated code. Allowing computed qualifiers/attributes would be a very elegant and general approach, and plays beautifully into the strength of D and our current investment in Boolean compile-time predicates. Andrei
Great idea.
Mar 14 2014
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 14 Mar 2014 11:17:08 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 3/14/14, 4:37 AM, Daniel Murphy wrote:
 "Walter Bright"  wrote in message news:lfu74a$8cr$1 digitalmars.com...

 No, it doesn't, because it is not usable if C introduces any virtual
 methods.
That's what the !final storage class is for.
My mistake, I forgot you'd said you were in favor of this. Being able to 'escape' final certainly gets us most of the way there. !final is really rather hideous though.
A few possibilities discussed around here:

!final
~final
final(false)
@disable final

I've had an epiphany literally a few seconds ago that "final(false)" has the advantage of being generalizable to "final(bool)" taking any CTFE-able Boolean.
Yes yes yes! Consider also final!false (i.e. parameterize final)

-Steve
Mar 14 2014
prev sibling next sibling parent reply "Namespace" <rswhite4 googlemail.com> writes:
On Friday, 14 March 2014 at 15:17:08 UTC, Andrei Alexandrescu 
wrote:
 On 3/14/14, 4:37 AM, Daniel Murphy wrote:
 "Walter Bright"  wrote in message 
 news:lfu74a$8cr$1 digitalmars.com...

 No, it doesn't, because it is not usable if C introduces 
 any virtual
 methods.
That's what the !final storage class is for.
My mistake, I forgot you'd said you were in favor of this. Being able to 'escape' final certainly gets us most of the way there. !final is really rather hideous though.
A few possibilities discussed around here:

!final
~final
final(false)
@disable final

I've had an epiphany literally a few seconds ago that "final(false)" has the advantage of being generalizable to "final(bool)" taking any CTFE-able Boolean.

On occasion I needed a computed qualifier (I think there's code in Phobos like that) and the only way I could do it was through ugly code duplication or odd mixin-generated code. Allowing computed qualifiers/attributes would be a very elegant and general approach, and plays beautifully into the strength of D and our current investment in Boolean compile-time predicates.

Andrei
Yeah. My idea is popular.
Mar 14 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/14/14, 8:30 AM, Namespace wrote:
 On Friday, 14 March 2014 at 15:17:08 UTC, Andrei Alexandrescu wrote:
 On 3/14/14, 4:37 AM, Daniel Murphy wrote:
 "Walter Bright"  wrote in message news:lfu74a$8cr$1 digitalmars.com...

 No, it doesn't, because it is not usable if C introduces > any
virtual
 methods.
That's what the !final storage class is for.
My mistake, I forgot you'd said you were in favor of this. Being able to 'escape' final certainly gets us most of the way there. !final is really rather hideous though.
A few possibilities discussed around here:

!final
~final
final(false)
@disable final

I've had an epiphany literally a few seconds ago that "final(false)" has the advantage of being generalizable to "final(bool)" taking any CTFE-able Boolean.

On occasion I needed a computed qualifier (I think there's code in Phobos like that) and the only way I could do it was through ugly code duplication or odd mixin-generated code. Allowing computed qualifiers/attributes would be a very elegant and general approach, and plays beautifully into the strength of D and our current investment in Boolean compile-time predicates.

Andrei
Yeah. My idea is popular.
Apologies for not having seen it!

Andrei
Mar 14 2014
prev sibling next sibling parent reply "ponce" <contact gam3sfrommars.fr> writes:
On Friday, 14 March 2014 at 15:17:08 UTC, Andrei Alexandrescu 
wrote:
 Allowing computed qualifiers/attributes would be a very elegant 
 and general approach, and plays beautifully into the strength 
 of D and our current investment in Boolean compile-time 
 predicates.
Bonus points if inout can be replaced that way :)
Mar 14 2014
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 03/14/2014 04:31 PM, ponce wrote:
 On Friday, 14 March 2014 at 15:17:08 UTC, Andrei Alexandrescu wrote:
 Allowing computed qualifiers/attributes would be a very elegant and
 general approach, and plays beautifully into the strength of D and our
 current investment in Boolean compile-time predicates.
Bonus points if inout can be replaced that way :)
It cannot.
Mar 16 2014
prev sibling next sibling parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
14-Mar-2014 19:17, Andrei Alexandrescu wrote:
 On 3/14/14, 4:37 AM, Daniel Murphy wrote:
 "Walter Bright"  wrote in message news:lfu74a$8cr$1 digitalmars.com...

 No, it doesn't, because it is not usable if C introduces any virtual
 methods.
That's what the !final storage class is for.
My mistake, I forgot you'd said you were in favor of this. Being able to 'escape' final certainly gets us most of the way there. !final is really rather hideous though.
A few possibilities discussed around here:

!final
~final
final(false)
@disable final

I've had an epiphany literally a few seconds ago that "final(false)" has the advantage of being generalizable to "final(bool)" taking any CTFE-able Boolean.

On occasion I needed a computed qualifier (I think there's code in Phobos like that) and the only way I could do it was through ugly code duplication or odd mixin-generated code. Allowing computed qualifiers/attributes would be a very elegant and general approach, and plays beautifully into the strength of D and our current investment in Boolean compile-time predicates.
+1 for qualifier(bool_expression)

-- 
Dmitry Olshansky
Mar 14 2014
prev sibling next sibling parent Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-03-14 15:17:08 +0000, Andrei Alexandrescu 
<SeeWebsiteForEmail erdani.org> said:

 A few possibilities discussed around here:
 
 !final
 ~final
 final(false)
  disable final
 
 I've had an epiphany literally a few seconds ago that "final(false)" 
 has the advantage of being generalizable to "final(bool)" taking any 
 CTFE-able Boolean.
 
 On occasion I needed a computed qualifier (I think there's code in 
 Phobos like that) and the only way I could do it was through ugly code 
 duplication or odd mixin-generated code. Allowing computed 
 qualifiers/attributes would be a very elegant and general approach, and 
 plays beautifully into the strength of D and our current investment in 
 Boolean compile-time predicates.
final(bool) is my preferred solution too. It certainly is more verbose than 'virtual', but it offers more possibilities. Also, the pattern is already somewhat established with align(int).

-- 
Michel Fortin
michel.fortin michelf.ca
http://michelf.ca
Mar 14 2014
prev sibling next sibling parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Andrei Alexandrescu"  wrote in message 
news:lfv6hk$12su$1 digitalmars.com...

 A few possibilities discussed around here:

 !final
 ~final
 final(false)
None of those are as nice as 'virtual'. It's not like it's a common variable name.
  @disable final
Nope, already valid.
 I've had an epiphany literally a few seconds ago that "final(false)" has 
 the advantage of being generalizable to "final(bool)" taking any CTFE-able 
 Boolean.

 On occasion I needed a computed qualifier (I think there's code in Phobos 
 like that) and the only way I could do it was through ugly code 
 duplication or odd mixin-generated code. Allowing computed 
 qualifiers/attributes would be a very elegant and general approach, and 
 plays beautifully into the strength of D and our current investment in 
 Boolean compile-time predicates.
This is a much bigger change than adding a new storage class, both from the language and implementation perspectives. How often do you really need this much power, now that we have attribute inference?
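The attribute inference Daniel refers to already covers templated functions in current D, which is why a computed qualifier is often unnecessary in generic code; a minimal illustration (valid D):

```d
// Current D: attributes of templated functions are inferred by the
// compiler from the function body, per instantiation.
auto twice(T)(T x) { return x + x; }

@safe pure nothrow unittest
{
    // twice!int is inferred @safe/pure/nothrow, so it is callable here.
    assert(twice(21) == 42);
}
```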
Mar 14 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/14/14, 8:43 AM, Daniel Murphy wrote:
 "Andrei Alexandrescu"  wrote in message
 news:lfv6hk$12su$1 digitalmars.com...
 On occasion I needed a computed qualifier (I think there's code in
 Phobos like that) and the only way I could do it was through ugly code
 duplication or odd mixin-generated code. Allowing computed
 qualifiers/attributes would be a very elegant and general approach,
 and plays beautifully into the strength of D and our current
 investment in Boolean compile-time predicates.
This is a much bigger change than adding a new storage class, both from the language and implementation perspectives.
We can just start with final(false). The point is the syntax offers a path to progress.
 How often do you really
 need this much power, now that we have attribute inference?
I'm not sure. It's a good question.

Andrei
Mar 14 2014
prev sibling next sibling parent reply Daniel Kozák <kozzi11 gmail.com> writes:
Andrei Alexandrescu wrote on Fri, 14 Mar 2014 at 08:17 -0700:
 On 3/14/14, 4:37 AM, Daniel Murphy wrote:
 "Walter Bright"  wrote in message news:lfu74a$8cr$1 digitalmars.com...

 No, it doesn't, because it is not usable if C introduces any virtual
 methods.
That's what the !final storage class is for.
My mistake, I forgot you'd said you were in favor of this. Being able to 'escape' final certainly gets us most of the way there. !final is really rather hideous though.
A few possibilities discussed around here:

!final
~final
final(false)
@disable final

I've had an epiphany literally a few seconds ago that "final(false)" has the advantage of being generalizable to "final(bool)" taking any CTFE-able Boolean.

On occasion I needed a computed qualifier (I think there's code in Phobos like that) and the only way I could do it was through ugly code duplication or odd mixin-generated code. Allowing computed qualifiers/attributes would be a very elegant and general approach, and plays beautifully into the strength of D and our current investment in Boolean compile-time predicates.

Andrei
At first I thought something like @disable(final, nothrow) would be the best way, but then I thought about it and realized that final(false) is much better. The only advantage of @disable(all) or @disable(something, something_else) is that we can disable more things more easily. But I have almost never needed this.
Mar 14 2014
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Friday, 14 March 2014 at 22:06:13 UTC, Daniel Kozák wrote:
 First I think have something like

  disable(final,nothrow) would be the best way, but than I think 
 about it
 and realize that final(false) is much more better.
If I may: final!false. We have a syntax for compile-time parameters; let's be consistent for once. The concept is solid and is the way to go. DIP, anyone?
Mar 14 2014
next sibling parent reply Daniel Kozák <kozzi11 gmail.com> writes:
deadalnix wrote on Fri, 14 Mar 2014 at 22:25 +0000:
 On Friday, 14 March 2014 at 22:06:13 UTC, Daniel Kozák wrote:
 First I think have something like

  disable(final,nothrow) would be the best way, but than I think 
 about it
 and realize that final(false) is much more better.
If I may, final!false . We have a syntax for compile time parameter. Let's be consistent for once. The concept is solid and is the way to go. DIP anyone ?
final!true
final!(true)
final(!true)

oops :)
Mar 15 2014
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Sat, 15 Mar 2014 04:08:51 -0400, Daniel Kozák <kozzi11 gmail.com>
wrote:

 deadalnix wrote on Fri, 14 Mar 2014 at 22:25 +0000:
 On Friday, 14 March 2014 at 22:06:13 UTC, Daniel Kozák wrote:
 First I think have something like

  disable(final,nothrow) would be the best way, but than I think
 about it
 and realize that final(false) is much more better.
If I may, final!false . We have a syntax for compile time parameter. Let's be consistent for once. The concept is solid and is the way to go. DIP anyone ?
final!true
final!(true)
final(!true)

oops :)
If final!true is valid, final(true) and final(!true) will not be.

-Steve
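Spelled out, the clash being joked about looks like this. All three declarations use hypothetical syntax (neither final(bool) nor final!bool exists in D), and as noted above, a real design would admit only one of the two spellings:

```d
// Hypothetical syntax only -- illustrating why mixing the two spellings
// would read ambiguously:
final!true   void f(); // template-style argument: final = true
final!(true) void g(); // same, with parentheses
final(!true) void h(); // call-style with logical 'not': final = false
```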
Mar 15 2014
prev sibling parent reply Rainer Schuetze <r.sagitario gmx.de> writes:
On 14.03.2014 23:25, deadalnix wrote:
 On Friday, 14 March 2014 at 22:06:13 UTC, Daniel Kozák wrote:
 First I think have something like

  disable(final,nothrow) would be the best way, but than I think about it
 and realize that final(false) is much more better.
If I may, final!false . We have a syntax for compile time parameter. Let's be consistent for once. The concept is solid and is the way to go. DIP anyone ?
To me, it's not a decision "final or virtual", but "final, virtual or override", so a boolean doesn't work. final!false could infer "virtual or override", but then it would lose the explicitness of introducing or overriding virtual.

I'm in favor of adding the keyword "virtual"; it is known by many from other languages with the identical meaning. Using anything else feels like having to invent something different because of being opposed to it at the start.

Adding compile time evaluation of function attributes is still worth considering, but I'd like a more generic approach, perhaps something along a mixin functionality:

enum WINAPI = "export extern(Windows)";

@functionAttributes!WINAPI HANDLE GetCurrentProcess();

though I would prefer avoiding string mixins, maybe by providing a function type as prototype:

alias export extern(Windows) void function() fnWINAPI;

@functionAttributesOf!fnWINAPI HANDLE GetCurrentProcess();
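For contrast, the closest current D gets to a computed attribute set is duplicating the declaration per configuration, which is what such a proposal would replace. A minimal sketch, valid D, assuming a user-defined version flag DynamicLib (set with -version=DynamicLib) and void* standing in for HANDLE:

```d
// Current D: the attribute set on a declaration can only be varied by
// duplicating it under version blocks.
version (DynamicLib)
{
    export extern (Windows) nothrow void* GetCurrentProcess();
}
else
{
    extern (Windows) nothrow void* GetCurrentProcess();
}
```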
Mar 16 2014
next sibling parent reply Manu <turkeyman gmail.com> writes:
On 16 March 2014 21:28, Rainer Schuetze <r.sagitario gmx.de> wrote:

 On 14.03.2014 23:25, deadalnix wrote:

 On Friday, 14 March 2014 at 22:06:13 UTC, Daniel Kozák wrote:

 First I think have something like

  @disable(final, nothrow) would be the best way, but than I think about it
 and realize that final(false) is much more better.
If I may, final!false . We have a syntax for compile time parameter. Let's be consistent for once. The concept is solid and is the way to go. DIP anyone ?
To me, it's not a decision "final or virtual", but "final, virtual or override", so a boolean doesn't work. final!false could infer "virtual or override", but then it would lose the explicitness of introducing or overriding virtual.

I'm in favor of adding the keyword "virtual"; it is known by many from other languages with the identical meaning. Using anything else feels like having to invent something different because of being opposed to it at the start.

 Adding compile time evaluation of function attributes is still worth
 considering, but I'd like a more generic approach, perhaps something along
 a mixin functionality:

 enum WINAPI = "export extern(Windows)";

  @functionAttributes!WINAPI HANDLE GetCurrentProcess();

 though I would prefer avoiding string mixins, maybe by providing a
 function type as prototype:

 alias export extern(Windows) void function() fnWINAPI;

  @functionAttributesOf!fnWINAPI HANDLE GetCurrentProcess();

I frequently find myself needing something like this. What's wrong with aliasing attributes directly?

GDC/LDC offer their own internal attributes, but you can't make use of them and remain portable without an ability to do something like the #define hack in C.
Mar 16 2014
parent Rainer Schuetze <r.sagitario gmx.de> writes:
On 16.03.2014 15:24, Manu wrote:
 On 16 March 2014 21:28, Rainer Schuetze <r.sagitario gmx.de
 <mailto:r.sagitario gmx.de>> wrote:


     alias export extern(Windows) void function() fnWINAPI;

      functionAttributesOf!fnWINAPI HANDLE GetCurrentProcess();


 I frequently find myself needing something like this. What's wrong with
 aliasing attributes directly?
 DGC/LDC offer their own internal attributes, but you can't make use of
 them and remain portable without an ability to do something like the
 #define hack in C.
Unfortunately, it doesn't fit very well with the grammar to allow something like

alias @property const final nothrow @safe pure propertyGet;

(or some special syntax) and then parse

propertyGet UserType fun();

because it's ambiguous without semantic knowledge of the identifiers. It becomes unambiguous with UDA syntax, though:

@propertyGet UserType fun();

I suspect propertyGet would have to describe some new "entity" that needs to be able to be passed around, aliased, used in CTFE, etc.
Mar 16 2014
prev sibling parent reply Marco Leise <Marco.Leise gmx.de> writes:
Am Sun, 16 Mar 2014 12:28:24 +0100
schrieb Rainer Schuetze <r.sagitario gmx.de>:

Are we still in the same discussion?
The only thing I miss is that among the several ways to
express function signatures in D, some don't allow you to
specify all attributes. My memory is blurry, but I think it
was function literals that I used to write stubs for runtime
loaded library functionality.

 […] though I would prefer avoiding string mixins, maybe by providing a
 function type as prototype:

 alias export extern(Windows) void function() fnWINAPI;

  @functionAttributesOf!fnWINAPI HANDLE GetCurrentProcess();

That is too pragmatic for my taste. Something that you define in code should be usable as is. It is like taking the picture of a red corner sofa just to describe the color to someone.

In that specific case, why does this not work for you?:

nothrow extern(Windows) {
   HANDLE GetCurrentProcess();
}

-- 
Marco
Mar 16 2014
parent reply Rainer Schuetze <r.sagitario gmx.de> writes:
On 17.03.2014 04:45, Marco Leise wrote:
 Am Sun, 16 Mar 2014 12:28:24 +0100
 schrieb Rainer Schuetze <r.sagitario gmx.de>:

 Are we still in the same discussion?
I guess we are drifting off. I was just considering some alternatives to "final(false)" which doesn't work.
 The only thing I miss is that among the several ways to
 express function signatures in D, some don't allow you to
 specify all attributes. My memory is blurry, but I think it
 was function literals that I used to write stubs for runtime
 loaded library functionality.

 […] though I would prefer avoiding string mixins, maybe by providing a
 function type as prototype:

 alias export extern(Windows) void function() fnWINAPI;

  functionAttributesOf!fnWINAPI HANDLE GetCurrentProcess();
That is too pragmatic for my taste. Something that you define in code should be usable as is. It is like taking the picture of a red corner sofa just to describe the color to someone.

In that specific case, why does this not work for you?:

nothrow extern(Windows) {
   HANDLE GetCurrentProcess();
}
The attributes sometimes need to be selected conditionally, e.g. when building a library for static or dynamic linkage (at least on windows where not everything is exported by default). Right now, you don't have an alternative to code duplication or heavy use of string mixins.
Mar 17 2014
parent reply Marco Leise <Marco.Leise gmx.de> writes:
Am Mon, 17 Mar 2014 20:10:31 +0100
schrieb Rainer Schuetze <r.sagitario gmx.de>:

 In that specific case, why does this not work for you?:

 nothrow extern(Windows) {
    HANDLE GetCurrentProcess();
 }
The attributes sometimes need to be selected conditionally, e.g. when building a library for static or dynamic linkage (at least on windows where not everything is exported by default). Right now, you don't have an alternative to code duplication or heavy use of string mixins.
Can we write this? It just came to my mind:

enum attribs = "nothrow extern(C):";

{
    mixin(attribs);
    HANDLE GetCurrentProcess();
}

-- 
Marco
Mar 17 2014
parent Rainer Schuetze <r.sagitario gmx.de> writes:
On 18.03.2014 02:15, Marco Leise wrote:
 Am Mon, 17 Mar 2014 20:10:31 +0100
 schrieb Rainer Schuetze <r.sagitario gmx.de>:

 In that specific case, why does this not work for you?:

 nothrow extern(Windows) {
     HANDLE GetCurrentProcess();
 }
The attributes sometimes need to be selected conditionally, e.g. when building a library for static or dynamic linkage (at least on windows where not everything is exported by default). Right now, you don't have an alternative to code duplication or heavy use of string mixins.
Can we write this? It just came to my mind:

enum attribs = "nothrow extern(C):";

{
    mixin(attribs);
    HANDLE GetCurrentProcess();
}
Interesting idea, though it doesn't seem to work:

enum attribs = "nothrow extern(C):";

extern(D) { // some dummy attribute to make it parsable
    mixin(attribs);
    int GetCurrentProcess();
}

int main() nothrow // Error: function 'D main' is nothrow yet may throw
{
    return GetCurrentProcess(); // Error: 'attr.GetCurrentProcess' is not nothrow
}

I guess this is by design, the mixin introduces declarations after the parser has already attached attributes to the non-mixin declarations.
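A variant that does work in current D is to put the entire declaration, attributes included, inside the string mixin, which sidesteps the parse-time attachment problem described above. A sketch, assuming GetCurrentProcess (stubbed here with return type int) is supplied by an external C library at link time:

```d
// Valid current D: mixing in the *complete* declaration means the
// attributes are part of the mixed-in source, not attached beforehand.
enum attribs = "nothrow extern(C)";

mixin(attribs ~ " int GetCurrentProcess();");

int main() nothrow
{
    return GetCurrentProcess(); // OK: mixed-in declaration carries 'nothrow'
}
```

This is exactly the "heavy use of string mixins" the thread complains about, which is why a computed-attribute feature keeps coming up.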
Mar 18 2014
prev sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Friday, March 14, 2014 08:17:08 Andrei Alexandrescu wrote:
 On 3/14/14, 4:37 AM, Daniel Murphy wrote:
 "Walter Bright" wrote in message news:lfu74a$8cr$1 digitalmars.com...
 
 No, it doesn't, because it is not usable if C introduces any virtual
 methods.
That's what the !final storage class is for.
My mistake, I forgot you'd said you were in favor of this. Being able to 'escape' final certainly gets us most of the way there. !final is really rather hideous though.
A few possibilities discussed around here:

!final
~final
final(false)
@disable final

I've had an epiphany literally a few seconds ago that "final(false)" has the advantage of being generalizable to "final(bool)" taking any CTFE-able Boolean.

On occasion I needed a computed qualifier (I think there's code in Phobos like that) and the only way I could do it was through ugly code duplication or odd mixin-generated code. Allowing computed qualifiers/attributes would be a very elegant and general approach, and plays beautifully into the strength of D and our current investment in Boolean compile-time predicates.
That sounds like a good approach and could definitely reduce the number of static ifs in some generic code (though as Daniel points out, I'm not sure how common that really is).

- Jonathan M Davis
Mar 14 2014
prev sibling next sibling parent reply Manu <turkeyman gmail.com> writes:
On 14 March 2014 07:42, Walter Bright <newshound2 digitalmars.com> wrote:

 On 3/13/2014 1:09 PM, Andrei Alexandrescu wrote:

 Also let's not forget that a bunch of people will have not had contact
 with the
 group and will not have read the respective thread. For them -- happy
 campers
 who get work done in D day in and day out, feeling no speed impact
 whatsoever
 from a virtual vs. final decision -- we are simply exercising the brunt
 of a
 deprecation cycle with undeniable costs and questionable (in Walter's and
 my
 opinion) benefits.
Also, class C { final: ... } achieves final-by-default and it breaks nothing.
It does nothing to prevent the library case, or the don't-early-optimise case of implying breaking changes in the future.

Please leave the virtual keyword as committed by Daniel Murphy in there. 'final:' is no use without a way to undo it for the few instances that should be virtual. It also really helps C++ portability.
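The point about 'final:' having no escape can be shown directly. The class below is valid current D; the commented-out line is the hypothetical escape that does not exist today:

```d
// Current D: once 'final:' is in effect, there is no way to switch a
// single later method back to virtual.
class Renderer
{
final:
    void flip() { }     // final, as intended
    void present() { }  // final

    // virtual void draw() { } // hypothetical -- no such keyword in D today
}
```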
Mar 13 2014
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/13/2014 8:56 PM, Manu wrote:
 Please leave the virtual keyword as commit by Daniel Murphy in there. 'final:'
 is no use without a way to undo it for the few instances that should be
virtual.
As I stated in the opening in this thread, I understand and agree with the need for a 'virtual' or '!final'. That is not at issue.
Mar 13 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/13/2014 8:56 PM, Manu wrote:
 It does nothing to prevent the library case,
I addressed that in another post here, i.e. I'm very skeptical of the notion that a large code base built with no attempt at writing fast code is going to get fast with merely this change. I don't buy that high performance code comes by accident. If you wish to debate this, though, please reply to my other post on the topic, which expounds with more detail.
 or the don't-early-optimise case of implying breaking changes in the future.
Optimizing code often involves breaking the interface. For example, the existence or absence of a destructor can have dramatic differences in the code generated. (Even very experienced C++ coders are often unaware of this.)
Mar 13 2014
prev sibling parent reply "Regan Heath" <regan netmail.co.nz> writes:
On Thu, 13 Mar 2014 21:42:43 -0000, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 3/13/2014 1:09 PM, Andrei Alexandrescu wrote:
 Also let's not forget that a bunch of people will have not had contact  
 with the
 group and will not have read the respective thread. For them -- happy  
 campers
 who get work done in D day in and day out, feeling no speed impact  
 whatsoever
 from a virtual vs. final decision -- we are simply exercising the brunt  
 of a
 deprecation cycle with undeniable costs and questionable (in Walter's  
 and my
 opinion) benefits.
Also, class C { final: ... } achieves final-by-default and it breaks nothing.
Yes.. but it doesn't help Manu or any other consumer concerned with speed if the library producer neglected to do this. This is the real issue, right? Not whether a class *can* be made final (trivial), but whether classes *actually will* *correctly* be marked final/virtual where they ought to be.

Library producers range in experience and expertise and are "only human", so we want the option which makes it more likely they will produce good code. In addition, we want the option which means that if they get it wrong, less will break if/when they want to correct it.

Final by default requires that you (the library producer) mark as virtual the functions you intend to be inherited from. Let's assume the library producer has a test case where s/he does just this, inherits from his/her classes and overrides methods as they see consumers doing. The compiler will detect any methods not correctly marked. So, there is a decent chance that producers will get this "right" w/ final by default. If they do get it wrong, making the change from final -> virtual does not break any consumer code.

Compare that to virtual by default, where marking everything virtual means it will always work, but there is a subtle and unlikely to be detected/tested performance penalty. There is no compiler support for detecting this, and no compiler support for correctly identifying the methods which should be marked final. In fact, you would probably mark them all final and then mark individual functions virtual in order to solve this. If they get it wrong, making the change from virtual -> final is more likely to break consumer code.

I realise you're already aware of the arguments for final by default, and convinced it would have been the best option, but it also seems to me that the "damage" that virtual by default will cause over the future lifetime of D is greater than a well controlled deprecation path from virtual -> final would be.

Even without a specific tool to aid deprecation, the compiler will output clear errors for methods which need to be marked virtual. Granted, this requires you to compile a program which "uses" the library, but most library producers should have such a test case already, and their consumers could help out a lot by submitting those errors directly.

Regan.

-- 
Using Opera's revolutionary email client: http://www.opera.com/mail/
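The compiler detection being relied on here already exists for explicitly-final methods in current D; uncommenting the override below produces the error named in the comment:

```d
// Valid current D: the compiler rejects an override of a final method,
// which is how mis-marked methods get caught under final by default.
class Base
{
    final void update() { }
}

class Derived : Base
{
    // override void update() { } // Error: cannot override final function Base.update
}
```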
Mar 14 2014
parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 3/14/2014 6:20 AM, Regan Heath wrote:
 Yes.. but doesn't help Manu or any other consumer concerned with speed
 if the library producer neglected to do this.  This is the real issue,
 right?  Not whether class *can* be made final (trivial), but whether
 they *actually will* *correctly* be marked final/virtual where they
 ought to be.

 Library producers range in experience and expertise and are "only human"
 so we want the option which makes it more likely they will produce good
 code.  In addition we want the option which means that if they get it
 wrong, less will break if/when they want to correct it.
While I personally would have been perfectly ok with changing to final-by-default (I'm fine either way), I can't help wondering: Is it really that big of a deal to sprinkle some "final"s into the occasional third party library if you really need to?
Mar 17 2014
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 18 March 2014 at 01:25:34 UTC, Nick Sabalausky wrote:
 While I personally would have been perfectly ok with changing 
 to final-by-default (I'm fine either way), I can't help 
 wondering: Is it really that big of a deal to sprinkle some 
 "final"s into the occasional third party library if you really 
 need to?
It makes that much noise because this is a problem that everybody understands. Much bigger problems do not receive any attention.
Mar 17 2014
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Mar 18, 2014 at 02:31:02AM +0000, deadalnix wrote:
 On Tuesday, 18 March 2014 at 01:25:34 UTC, Nick Sabalausky wrote:
While I personally would have been perfectly ok with changing to
final-by-default (I'm fine either way), I can't help wondering: Is
it really that big of a deal to sprinkle some "final"s into the
occasional third party library if you really need to?
It makes that much noise because this is a problem that everybody understand. Much bigger problem do not receive any attention.
http://en.wikipedia.org/wiki/Parkinson's_law_of_triviality

:-)

T

-- 
They say that "guns don't kill people, people kill people." Well I think the gun helps. If you just stood there and yelled BANG, I don't think you'd kill too many people. -- Eddie Izzard, Dressed to Kill
Mar 17 2014
prev sibling parent reply Manu <turkeyman gmail.com> writes:
On 14 March 2014 06:09, Andrei Alexandrescu
<SeeWebsiteForEmail erdani.org>wrote:

 On 3/13/14, 11:37 AM, Paolo Invernizzi wrote:

 As I've stated, it is not about the single decision, I don't care about
 final vs virtual in our code. it's about the whole way that "planned
 improvement" changes to the language are managed.
Got it, thanks. What I was meaning is: why the past mega-thread about virtual vs final
 (that I don't care about!) that seemed (to me!) that placed a concrete
 direction goal was (to me!) scraped like a thunder in clean sky.

 Where's the discussion why "it turned out to be not enough"?

 What scares me (as a company using the language) was that I wasn't able
 to "grasp" that fact in forum till now.

 So, that could also happen to *other* aspect of the language that a care
 for my business, without even having the ability do discuss about the
 motivation of a decision.

  There must be a way to convey that a decision has been made. It is
 understood it won't please everybody, just like going the other way
 won't please everybody. Please let me know what that way is.
Again, the whole point was that it seemed to me that a decision was taken in that famous thread. My feedback, take it as you want, Andrei, is that such behaviours are far more scary than the whole point of managing a "planned" (again!) language change.
Understood. That's one angle. The other angle is that a small but vocal faction can intimidate the language leadership to effect a large breaking change that it doesn't believe in.
I feel like this was aimed at me, and I also feel it's unfair.

If you recall back to the first threads on the topic, I was the absolute minority, almost a lone voice. Practically nobody agreed; in fact, there was quite aggressive objection across the board, until much discussion about it had passed.

I was amazed to see in this thread how many have changed their minds from past discussions. In fact, my impression from this thread is that the change now has almost unanimous support, and by my recollection, many(/most?) of those people were initially against.

To say this is a small vocal faction is unfair (unless you mean me personally?). A whole bunch of people who were originally against, but were convinced by argument and evidence, is not a 'faction' with an agenda to intimidate their will upon leadership.

I suspect what seems strange to the participants in this thread, despite what eventually appears to have concluded in almost unanimous agreement (especially surprising considering the starting point years back!), is the abrupt refusal. That's Walter's prerogative, I guess... if he feels that strongly about it, then I'm not going to force the issue any more.

I am surprised, though, considering the level of support for the change expressed in this thread, which came as a surprise to me; it's the highest it's ever been... much greater than in prior discussions on the topic. You always say forum participation is not a fair representation of the community, but when the forum representation is near unanimous, you have to begin to be able to make some assumptions about the wider community's opinion.

Back to pushing the ARC wagon for me...

 Also let's not forget that a bunch of people will have not had contact with
 the group and will not have read the respective thread. For them -- happy
 campers who get work done in D day in and day out, feeling no speed impact
 whatsoever from a virtual vs. final decision -- we are simply exercising
 the brunt of a deprecation cycle with undeniable costs and questionable (in
 Walter's and my opinion) benefits.
Mar 13 2014
parent reply Marco Leise <Marco.Leise gmx.de> writes:
Am Fri, 14 Mar 2014 13:48:40 +1000
schrieb Manu <turkeyman gmail.com>:

 I feel like this was aimed at me, and I also feel it's unfair.
 
 If you recall back to the first threads on the topic, I was the absolute
 minority, almost a lone voice. Practically nobody agreed; in fact, there was
 quite aggressive objection across the board, until much discussion about it
 has passed.
 I was amazed to see in this thread how many have changed their minds from
 past discussions. In fact, my impression from this thread is that the change
 now has almost unanimous support, and by my recollection, many(/most?) of
 those people were initially against.
 
 To say this is a small vocal faction is unfair (unless you mean me
 personally?). A whole bunch of people who were originally against, but were
 convinced by argument and evidence is not a 'faction' with an agenda to
 intimidate their will upon leadership.
 I suspect what seems strange to the participants in this thread, that
 despite what eventually appears to have concluded in almost unanimous
 agreement (especially surprising considering the starting point years
 back!), is the abrupt refusal.
 That's Walter's prerogative I guess... if he feels that strongly about it,
 then I'm not going to force the issue any more.
 
 I am surprised though, considering the level of support for the change
 expressed in this thread, which came as a surprise to me; it's the highest
 it's ever been... much greater than in prior discussions on the topic.
 You always say forum participation is not a fair representation of the
 community, but when the forum representation is near unanimous, you have to
 begin to be able to make some assumptions about the wider community's
 opinion.
Me too, I got the impression that once the library authoring issue was on the table, suddenly everyone could relate to final-by-default and the majority of the forum community found it to be a reasonable change. For once in a decade it seemed that one of the endless discussions reached a consensus and a plan of action: issue a warning, then deprecate. I was seriously relieved to see an indication of a working decision-making process initiated by the forum community. After all, digitalmars.D is for discussing the language.

Then this process comes to a sudden halt, because Walter gets negative feedback about some unrelated breaking change, and Andrei considers final-by-default good, but too much of a breaking change for what it's worth. Period. After such a long community-driven discussion about it.

One message that this sends out is that a proposal, even with almost complete lack of opposition, an in-depth discussion, long-term benefits and being in line with the language's goals, can be turned down right when it is ready to be merged. The other message is that the community as per this forum is not representative of the target audience, so our decisions may not be in the best interest of D. Surprisingly though, most commercial adopters that ARE here, except for one, have no problem with this announced language change for the better.

I neither see a small vocal faction intimidating (wow!) the leadership, nor do I see a dictate of the majority. At least 2 people mentioned different reasons for final-by-default that convinced most of us that it positively changes D. ...without threats like "we won't use D any more if you don't agree".

Paying customers including Facebook can have influence on what is worked on, but D has become a community effort, and freezing the language for the sake of creating a stable target for them while core language features are still to be finalized (i.e. shared, allocation) is not convincing.

-- Marco
Mar 15 2014
parent "Francesco Cattoglio" <francesco.cattoglio gmail.com> writes:
I don't think that the virtual-by-default is the most important 
aspect of the language, so I can live with it (even if I strongly 
dislike it). What actually scares me is this:

On Saturday, 15 March 2014 at 11:59:41 UTC, Marco Leise wrote:

 One message that this sends out is that a proposal, even with
 almost complete lack of opposition, an in-depth discussion,
 long term benefits and being in line with the language's goals
 can be turned down right when it is ready to be merged.
A decision was taken, backed by several people, work had begun (yebblies' pull requests), and now the proposal gets reverted out of the blue. It makes me wonder how the future of D is decided upon. To me, it really feels like it's made by last-second decisions. I think it can really make a bad impression on newcomers.
 I neither see a small vocal faction intimidating (wow!) the
 leadership, nor do I see a dictate of the majority.
Agree. But I think that the "vocal faction intimidating" was just a horrible choice of words, with no harmful intent. Just like the "But we nearly lost a major client over it." used at the beginning of the thread.
Mar 15 2014
prev sibling parent "ed" <growlercab gmail.com> writes:
On Thursday, 13 March 2014 at 18:37:36 UTC, Paolo Invernizzi 
wrote:
 On Thursday, 13 March 2014 at 17:56:09 UTC, Andrei Alexandrescu 
 wrote:
 On 3/13/14, 10:21 AM, Paolo Invernizzi wrote:

 Told that, I'm following the forum as this is by far the best 
 way to
 reinforce of undermine my past decision (and sleep well at 
 night!)
 That why I think that, IMHO, companies that adopted D 
 "seriously" are
 present here, and are lurking.
I don't think so. This isn't the case for my employer, and hasn't been the case historically for a lot of the companies using other languages. I have plenty of experience with forum dynamics to draw from.
So we disagree on that, and that's fine by me, but this doesn't change the fact that your presence here makes your employer, Facebook, well represented in the forum now that it has something committed to D, IMHO...
 Just to give a perspective, we are not so big like 
 Sociomantic but we
 are making some $M, so for us the decision was not a joke.
Again, it's unlikely the decision would have been affected either way by a minor language design detail. The matter is you seem convinced final would improve your use of D, and therefore are unhappy with the decision. For those who aren't, we'd seem flaky by breaking their code.
As I've stated, it is not about the single decision; I don't care about final vs virtual in our code. It's about the whole way that "planned improvement" changes to the language are managed.
 And to be honest, what's really scary is not the frequency of the
 "planned improvement" of the language, but that a feeling turned "a
 solid piece of evidence" [1] into smoke. Today it's virtual; tomorrow,
 who knows?

 [1]
 http://forum.dlang.org/thread/yzsqwejxqlnzryhrkfuq forum.dlang.org?page=23#post-koo65g:241nqs:242:40digitalmars.com
Now I think you're being unfair. Yes, it was a good piece of evidence. And yes, it turned out to be not enough. It's that simple and that visible. What, are Walter and I doing cover-up machinations now???
I'm not a native English speaker, but it doesn't seem to me that the meaning of what I wrote was that D is driven by machinations.

What I meant is: why was the past mega-thread about virtual vs final (which I don't care about!) that seemed (to me!) to set a concrete direction scrapped (to me!) like thunder from a clear sky? Where's the discussion of why "it turned out to be not enough"?

What scares me (as a company using the language) is that I wasn't able to "grasp" that fact in the forum till now. So that could also happen to *other* aspects of the language that I care about for my business, without even having the ability to discuss the motivation of a decision.
 There must be a way to convey that a decision has been made. 
 It is understood it won't please everybody, just like going 
 the other way won't please everybody. Please let me know what 
 that way is.
Again, the whole point was that it seemed to me that a decision was taken in that famous thread. My feedback, take it as you want Andrei, is that such behaviour is far more frightening than the whole matter of managing a "planned" (again!) language change.

Thanks,
- Paolo
This is still at DRAFT status: http://wiki.dlang.org/DIP51

If this was in the "Accepted" state I would agree with your concerns.

I'd like to see DIPs and the Phobos Review Queue extended to cover all language changes and improvements, ignoring bug fixes. It might be the case now, but there is no way to tell ... at least as a D user. The 2.065 Change Log has 10 language changes, 1 compiler change and 4 library changes with no reference to a DIP or Phobos Review. The Change Log does reference DIP37 twice, but they are both bug fixes so it doesn't count :)

There should be a DIP for Walter's proposal in this thread, even if the decision has already been made. Also, DIP51's status should be changed to "Rejected" with a paragraph explaining why it was rejected, and possibly a link back to the forum for the gory discussion details.

Cheers,
ed
Mar 13 2014
prev sibling next sibling parent reply "Daniele Vian" <dontreply example.com> writes:
On Thursday, 13 March 2014 at 08:16:50 UTC, Andrej Mitrovic wrote:
 On 3/13/14, Walter Bright <newshound2 digitalmars.com> wrote:
 I didn't even know about this client before the breakage.
I'm really getting tired of this argument. An unknown client (which you still haven't named, so as far as I'm concerned it might as well be just a reddit troll) comes out of the blue, complains about some small breakage which can *easily* be fixed in a point release, and suddenly that has to affect the decision on final by default. Also, the client hasn't bothered to file a bug report, and 2.056 has been released for a few weeks (nevermind the massively long beta cycle).
Hi guys, I happen to be one of "the clients" in this case. We have several tools and apps written in D, in production, and we're actively trying to convert other portions of our systems too. We're an Italian company managing the technical background of several startups (and "grown-ups") ranging from simple editorial/publishing business to realtime restaurant reservations and so on.

I think you can make every change you need / you want, but you should allow some time for people to update to new versions. No problem if you want to mark a function as "deprecated" (or even a whole library), but please don't break compatibility all of a sudden (and without warning even in the changelog).

A breaking change leads to several problems:

- If you have code in production, code switching is a bit painful because you have to change code all at once. A short-period transition supported by "deprecated" methods makes it all easier to do.

- If your deployment system is automated (code updating and compiling) it's a bit difficult to sync the upgrade of compiler and code. If you upgrade your compiler first, the code won't compile. If you upgrade your code first, it won't compile either. So in that (hopefully short) window your deployment system is not working.

- Changes could also affect third-party libraries you're using in production, and that's even worse. You have to wait and hope they will fix them quickly.

- You can't always do a search/replace to fix the problems, and it could take a long time to fix all the involved code. A breaking change could also lead to a change of code structure. And it's probably a good idea to run all the needed tests after this, in order to check that the app is working as expected. In the meanwhile you can't use the new compiler with potential security/bug/performance fixes: you're stuck on the old version.

Thanks,
Daniele
Mar 13 2014
parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 3/13/14, Daniele Vian <dontreply example.com> wrote:
 I think you can do every changes you need / you want but you
 should allow some time for people to update to new versions. No
 problems if you want to set a function as "deprecated" (or even a
 whole library) but please don't break compatibility all of a
 sudden (and without warning even in the changelog).
As was said before, this wasn't a planned API break; it was a regression. It will be fixed in the next point release; the fix for this has already been merged to the master branch. But why didn't you simply file a bug report instead of reporting it to Walter?
Mar 13 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/13/2014 1:16 AM, Andrej Mitrovic wrote:
 On 3/13/14, Walter Bright <newshound2 digitalmars.com> wrote:
 I didn't even know about this client before the breakage.
I'm really getting tired of this argument. An unknown client (which you still haven't named,
Companies won't trust me if I blather whatever they say about their private business over the internet. My default behavior is, unless they give me explicit permission, to treat my interactions with them as confidential.
 so as far as I'm concerned it might as well
 be just a reddit troll) comes out of the blue,
I find this accusation quite unfair. You either trust me to work in the best interests of D, or you don't. I've been quite open in explaining the reasoning going on. Naming isn't going to add anything.
 I mean the whole idea of client X deciding to ring up Andrei or Walter, NDAs 
to not disclose their name, and make an executive decision on some language/phobos feature.

Anyone can (and has) sent me emails about D which they wish to be confidential, and I treat them as such, and will continue to do so. People who want to express concerns privately may do so, and those who want to express them publicly can do so (right here) as well.

Please also consider that the proposal for final-by-default comes from Manu, formerly of Remedy Games. Recall that I implemented UDAs ostensibly for Manu & Remedy, but also because I thought it was a great feature for D. But this one I am not so convinced is great for D.

The takeaway is I am certainly not doing things just because some client asked for it and to hell with the rest of the community - the evidence contraindicates that.
Mar 13 2014
prev sibling parent reply "Don" <x nospam.com> writes:
On Thursday, 13 March 2014 at 06:02:27 UTC, Walter Bright wrote:
 On 3/12/2014 9:23 PM, Manu wrote:
 It's not minor, and it's not achievable by other means though.
class C { final: ... } does it.
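The `final:` label pattern Walter refers to can be sketched like this (a minimal illustration added here, not code from the thread):

```d
// `final:` applies to every member declared after it, so the class
// becomes final-by-default without annotating each method.
class C
{
final:
    void foo() {}  // final: cannot be overridden in subclasses
    void bar() {}  // also final
}

// To keep a single virtual method, declare it before the label:
class D
{
    void overridable() {}  // still virtual
final:
    void fixed() {}        // final
}
```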
 You and Andrei are the only resistance in this thread so far. 
 Why don't you ask
 'temperamental client' what their opinion is? Give them a 
 heads up, perhaps
 they'll be more reasonable than you anticipate?
I didn't even know about this client before the breakage. D has a lot of users who we don't know about.
 Both myself and Don have stated on behalf of industrial 
 clients that we embrace
 breaking changes that move the language forward, or correct 
 clearly identifiable
 mistakes.
Breaking changes has been a huge barrier to Don's company being able to move from D1 to D2. I still support D1 specifically for Don's company.
Yes, but the problem is not the changes which cause compile errors and force you to change your code in obvious ways. The problem is subtle changes to behaviour.

The worst breaking change in D2, by far, is the prevention of array stomping. After that change, our code still runs, and produces exactly the same results, but it is so slow that it's completely unusable. This is one of the main reasons we're still using D1.
Mar 13 2014
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 13 Mar 2014 04:43:51 -0400, Don <x nospam.com> wrote:

 On Thursday, 13 March 2014 at 06:02:27 UTC, Walter Bright wrote:
 On 3/12/2014 9:23 PM, Manu wrote:
 It's not minor, and it's not achievable by other means though.
class C { final: ... } does it.
 You and Andrei are the only resistance in this thread so far. Why  
 don't you ask
 'temperamental client' what their opinion is? Give them a heads up,  
 perhaps
 they'll be more reasonable than you anticipate?
I didn't even know about this client before the breakage. D has a lot of users who we don't know about.
 Both myself and Don have stated on behalf of industrial clients that  
 we embrace
 breaking changes that move the language forward, or correct clearly  
 identifiable
 mistakes.
Breaking changes has been a huge barrier to Don's company being able to move from D1 to D2. I still support D1 specifically for Don's company.
Yes, but the problem is not the changes which cause compile errors and force you to change your code in obvious ways. The problem is subtle changes to behaviour. The worst breaking change in D2, by far, is the prevention of array stomping. After that change, our code still runs, and produces exactly the same results, but it is so slow that it's completely unusable. This is one of the main reasons we're still using D1.

What is your use case(s), might I ask? Prevention of array stomping, I thought, had a net positive effect on performance, because it no longer has to lock the GC for thread-local appends.

-Steve
What is your use case(s), might I ask? Prevention of array stomping, I thought, had a net positive effect on performance, because it no longer has to lock the GC for thread-local appends. -Steve
Mar 13 2014
parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Steven Schveighoffer"  wrote in message 
news:op.xcnu55j2eav7ka stevens-macbook-pro.local...

 The worst breaking change in D2, by far, is the prevention of array 
 stomping.
What is your use case(s), might I ask? Prevention of array stomping, I thought, had a net positive effect on performance, because it no longer has to lock the GC for thread-local appends.
I would guess they're setting length to zero and appending to re-use the memory.
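For readers unfamiliar with the idiom, here is a minimal sketch (variable names hypothetical) of the D1-era reuse pattern Daniel describes:

```d
int[] buf;

foreach (line; 0 .. 100)
{
    buf.length = 0;  // in D1 this kept the allocated block...
    buf ~= line;     // ...so the append re-filled the same memory
}
// In D2, the append after `length = 0` reallocates instead (to prevent
// stomping), unless buf.assumeSafeAppend() is called after shrinking.
```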
Mar 13 2014
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 13 Mar 2014 09:17:06 -0400, Daniel Murphy  
<yebbliesnospam gmail.com> wrote:

 "Steven Schveighoffer"  wrote in message  
 news:op.xcnu55j2eav7ka stevens-macbook-pro.local...

 The worst breaking change in D2, by far, is the prevention of array >  
stomping. What is your use case(s), might I ask? Prevention of array stomping, I thought, had a net positive effect on performance, because it no longer has to lock the GC for thread-local appends.
I would guess they're setting length to zero and appending to re-use the memory.
Are they using assumeSafeAppend? If not, totally understand. If they are, then I want to fix whatever is wrong. -Steve
Mar 13 2014
parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Steven Schveighoffer"  wrote in message 
news:op.xcnxdfikeav7ka stevens-macbook-pro.local...

 Are they using assumeSafeAppend? If not, totally understand. If they are, 
 then I want to fix whatever is wrong.
Well no, it's D1 code.
Mar 13 2014
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 13 Mar 2014 09:54:55 -0400, Daniel Murphy  
<yebbliesnospam gmail.com> wrote:

 "Steven Schveighoffer"  wrote in message  
 news:op.xcnxdfikeav7ka stevens-macbook-pro.local...

 Are they using assumeSafeAppend? If not, totally understand. If they  
 are, then I want to fix whatever is wrong.
Well no, it's D1 code.
Right. I wasn't sure if an attempt was made to convert and they considered that option usable or not. Considering the mechanism that is used to "reset" buffers, I would expect it to be a difficult process to update to use assumeSafeAppend. But I would expect the performance to actually increase once that is done. -Steve
Mar 13 2014
prev sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 13 March 2014 at 13:16:54 UTC, Daniel Murphy wrote:
 "Steven Schveighoffer"  wrote in message 
 news:op.xcnu55j2eav7ka stevens-macbook-pro.local...

 The worst breaking change in D2, by far, is the prevention 
 of array stomping.
What is your use case(s), might I ask? Prevention of array stomping, I thought, had a net positive effect on performance, because it no longer has to lock the GC for thread-local appends.
I would guess they're setting length to zero and appending to re-use the memory.
Exactly. So far it looks like, upon transition to D2, almost all arrays used in our code will need to be replaced with some variation of Appender!T.
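A sketch of what such a replacement might look like with std.array.Appender, assuming the buffer is reused across iterations:

```d
import std.array : appender;

auto buf = appender!(int[])();
foreach (i; 0 .. 3)
{
    buf.clear();   // resets the length but keeps the allocated capacity
    buf.put(i);
    buf.put(i * 2);
    // buf.data slices the accumulated elements without copying
}
```

The design point is the same as the D1 idiom: `clear` plays the role of `length = 0`, re-using the block instead of reallocating on the next `put`.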
Mar 13 2014
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 13 Mar 2014 09:37:51 -0400, Dicebot <public dicebot.lv> wrote:

 On Thursday, 13 March 2014 at 13:16:54 UTC, Daniel Murphy wrote:
 "Steven Schveighoffer"  wrote in message  
 news:op.xcnu55j2eav7ka stevens-macbook-pro.local...

 The worst breaking change in D2, by far, is the prevention > of  
array stomping. What is your use case(s), might I ask? Prevention of array stomping, I thought, had a net positive effect on performance, because it no longer has to lock the GC for thread-local appends.
I would guess they're setting length to zero and appending to re-use the memory.
Exactly. So far looks like upon transition to D2 almost all arrays used in our code will need to be replaced with some variation of Appender!T
I think you might find that it will run considerably faster in that case. In the old mechanism of D1, the GC lock was used on every append, and if you had multiple threads appending simultaneously, they were contending with the single-element cache to look up block info. Appender only needs to look up GC block info when it needs more memory from the GC.

I would also mention that a "band-aid" fix, if you are always using x.length = 0, is to special-case that in the runtime to automatically reset the used size to 0 as well. This is a specialized application; I would think tweaking the runtime is a possibility, and a temporary fix like this until you can update your code would at least provide an intermediate solution.

-Steve
Mar 13 2014
next sibling parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Steven Schveighoffer"  wrote in message 
news:op.xcnxwzc8eav7ka stevens-macbook-pro.local...

 I would also mention that a "band-aid" fix, if you are always using 
 x.length = 0, is to special case that in the runtime to automatically 
 reset the used size to 0 as well. This is a specialized application, I 
 would think tweaking the runtime is a possibility, and a temporary fix 
 like this until you can update your code would at least provide an 
 intermediate solution.
That would risk breaking code in druntime/phobos.
Mar 13 2014
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 13 Mar 2014 09:59:41 -0400, Daniel Murphy  
<yebbliesnospam gmail.com> wrote:

 "Steven Schveighoffer"  wrote in message  
 news:op.xcnxwzc8eav7ka stevens-macbook-pro.local...

 I would also mention that a "band-aid" fix, if you are always using  
 x.length = 0, is to special case that in the runtime to automatically  
 reset the used size to 0 as well. This is a specialized application, I  
 would think tweaking the runtime is a possibility, and a temporary fix  
 like this until you can update your code would at least provide an  
 intermediate solution.
That would risk breaking code in druntime/phobos.
I think it would be highly unlikely. Consider the use case:

    arr contains some number of elements;
    someOtherArr = arr;
    arr.length = 0;
    ...
    arr ~= ...;
    use someOtherArr, which now has stomped data.

Now, if the code means to reuse the buffer, it will call assumeSafeAppend, which in this case would be a no-op, and things will continue to work as expected. If it just relies on the runtime re-allocating arr on the next append, why not just do arr = null? If the code doesn't have someOtherArr, then it will still work just fine. I can't see a reason to reset the length to 0 if you aren't going to reuse the array bytes, instead of setting it to null.

If the code doesn't always reset to length 0, and then tries appending, then Sociomantic's code will still be poorly performing. But I'm assuming it's always length = 0. I would ONLY trigger on that condition (and the array has to be starting at the beginning of the block, I should have mentioned that).

I think it would work without issue. In fact, I wonder if that shouldn't be a "feature" of the array runtime anyway.

-Steve
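The scenario above, written out as a small sketch (values hypothetical) of why stomping is unsafe when another slice still references the same block:

```d
int[] arr = [1, 2, 3];
int[] someOtherArr = arr;  // second slice of the same memory block

arr.length = 0;
arr.assumeSafeAppend();    // promise: no other slice needs the old data
arr ~= 42;                 // re-uses the block in place...

// ...so someOtherArr[0] now reads 42: the shared data was stomped.
```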
Mar 13 2014
parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Steven Schveighoffer"  wrote in message 
news:op.xcnyu21leav7ka stevens-macbook-pro.local...

 I think it would work without issue.
Sure, probably.
 In fact, I wonder if that shouldn't be a "feature" of the array runtime 
 anyway.
It breaks the type system when you use it with immutable.
Mar 13 2014
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 13 Mar 2014 10:29:48 -0400, Daniel Murphy  
<yebbliesnospam gmail.com> wrote:

 "Steven Schveighoffer"  wrote in message  
 news:op.xcnyu21leav7ka stevens-macbook-pro.local...

 I think it would work without issue.
Sure, probably.
 In fact, I wonder if that shouldn't be a "feature" of the array runtime  
 anyway.
It breaks the type system when you use it with immutable.
This is true. We could restrict it only to mutable arrays (I get the type in via its typeinfo). -Steve
Mar 13 2014
prev sibling parent reply "Don" <x nospam.com> writes:
On Thursday, 13 March 2014 at 13:47:13 UTC, Steven Schveighoffer 
wrote:
 On Thu, 13 Mar 2014 09:37:51 -0400, Dicebot <public dicebot.lv> 
 wrote:

 On Thursday, 13 March 2014 at 13:16:54 UTC, Daniel Murphy 
 wrote:
 "Steven Schveighoffer"  wrote in message 
 news:op.xcnu55j2eav7ka stevens-macbook-pro.local...

 The worst breaking change in D2, by far, is the prevention
 of
array stomping. What is your use case(s), might I ask? Prevention of array stomping, I thought, had a net positive effect on performance, because it no longer has to lock the GC for thread-local appends.
I would guess they're setting length to zero and appending to re-use the memory.
Exactly. So far looks like upon transition to D2 almost all arrays used in our code will need to be replaced with some variation of Appender!T
I think you might find that it will run considerably faster in that case. In the old mechanism of D1, the GC lock was used on every append, and if you had multiple threads appending simultaneously, they were contending with the single element cache to look up block info. Appender only needs to look up GC block info when it needs more memory from the GC.
We don't use threads.
Mar 14 2014
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 14 Mar 2014 05:35:03 -0400, Don <x nospam.com> wrote:

 On Thursday, 13 March 2014 at 13:47:13 UTC, Steven Schveighoffer wrote:
 On Thu, 13 Mar 2014 09:37:51 -0400, Dicebot <public dicebot.lv> wrote:

 On Thursday, 13 March 2014 at 13:16:54 UTC, Daniel Murphy wrote:
 "Steven Schveighoffer"  wrote in message  
 news:op.xcnu55j2eav7ka stevens-macbook-pro.local...

 The worst breaking change in D2, by far, is the prevention
 of
array stomping. What is your use case(s), might I ask? Prevention of array stomping, I thought, had a net positive effect on performance, because it no longer has to lock the GC for thread-local appends.
I would guess they're setting length to zero and appending to re-use the memory.
Exactly. So far looks like upon transition to D2 almost all arrays used in our code will need to be replaced with some variation of Appender!T
I think you might find that it will run considerably faster in that case. In the old mechanism of D1, the GC lock was used on every append, and if you had multiple threads appending simultaneously, they were contending with the single element cache to look up block info. Appender only needs to look up GC block info when it needs more memory from the GC.
We don't use threads.
In that case, the GC will not use a lock (I think it avoids allocating a lock until at least 2 threads exist). However, you can still benefit from the expanded cache if you are appending to 2 to 8 arrays simultaneously. The D1 cache was for one block only. -Steve
Mar 14 2014
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 13 Mar 2014 09:37:51 -0400, Dicebot <public dicebot.lv> wrote:

 On Thursday, 13 March 2014 at 13:16:54 UTC, Daniel Murphy wrote:
 "Steven Schveighoffer"  wrote in message  
 news:op.xcnu55j2eav7ka stevens-macbook-pro.local...

 The worst breaking change in D2, by far, is the prevention > of  
array stomping. What is your use case(s), might I ask? Prevention of array stomping, I thought, had a net positive effect on performance, because it no longer has to lock the GC for thread-local appends.
I would guess they're setting length to zero and appending to re-use the memory.
Exactly. So far looks like upon transition to D2 almost all arrays used in our code will need to be replaced with some variation of Appender!T
Also, if you didn't see my other message, assumeSafeAppend would help here. I would replace all arr.length = 0 to a function call that does arr.length = 0; arr.assumeSafeAppend(); -Steve
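Steve's suggestion, wrapped into a helper (the function name is hypothetical):

```d
// Shrink an array to zero length and immediately re-enable in-place
// appending, restoring D1-style buffer reuse at this call site.
void resetBuffer(T)(ref T[] arr)
{
    arr.length = 0;
    arr.assumeSafeAppend();
}
```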
Mar 13 2014
prev sibling next sibling parent "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Dicebot"  wrote in message news:pwgfmyizziqoqhwrbtdf forum.dlang.org...

 Exactly. So far looks like upon transition to D2 almost all arrays used in 
 our code will need to be replaced with some variation of Appender!T
Or stick in assumeSafeAppend everywhere? You can even do this before the transition, with a no-op function.
Mar 13 2014
prev sibling parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Thursday, 13 March 2014 at 13:37:52 UTC, Dicebot wrote:
 On Thursday, 13 March 2014 at 13:16:54 UTC, Daniel Murphy wrote:
 "Steven Schveighoffer"  wrote in message 
 news:op.xcnu55j2eav7ka stevens-macbook-pro.local...

 The worst breaking change in D2, by far, is the prevention 
 of array stomping.
What is your use case(s), might I ask? Prevention of array stomping, I thought, had a net positive effect on performance, because it no longer has to lock the GC for thread-local appends.
I would guess they're setting length to zero and appending to re-use the memory.
Exactly. So far looks like upon transition to D2 almost all arrays used in our code will need to be replaced with some variation of Appender!T
From what little I have read about Sociomantic's codebase (pre-allocated lumps of memory, stomped all over to keep code simple and prevent needing extra allocations, am I right?), I would imagine you would be better off just emulating the old behaviour in a little wrapper around a built-in array.
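A minimal sketch (all names hypothetical) of the wrapper John suggests, emulating the old reuse behaviour on top of a built-in array:

```d
struct ReusableBuffer(T)
{
    private T[] data;

    // Shrink to empty but keep the allocation, as D1 arrays did.
    void reset()
    {
        data.length = 0;
        data.assumeSafeAppend();
    }

    // Forward appends to the underlying array.
    void opOpAssign(string op : "~")(T value)
    {
        data ~= value;
    }

    // View the accumulated elements without copying.
    inout(T)[] slice() inout { return data; }
}
```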
Mar 13 2014
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/13/14, 1:43 AM, Don wrote:
 The worst breaking change in D2, by far, is the prevention of array
 stomping.

 After that change, our code still runs, and produces exactly the same
 results, but it is so slow that it's completely unusable. This is one of
 the main reasons we're still using D1.
Interesting. Are there ways to refactor around this issue without a major redesign?

Andrei
Mar 13 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/13/2014 1:43 AM, Don wrote:
 The worst breaking change in D2, by far, is the prevention of array stomping.

 After that change, our code still runs, and produces exactly the same results,
 but it is so slow that it's completely unusable. This is one of the main reasons
 we're still using D1.
I didn't know this. I'd like more details - perhaps I can help with how to deal with it.
Mar 13 2014
parent reply "Don" <x nospam.com> writes:
On Thursday, 13 March 2014 at 19:28:59 UTC, Walter Bright wrote:
 On 3/13/2014 1:43 AM, Don wrote:
 The worst breaking change in D2, by far, is the prevention of 
 array stomping.

 After that change, our code still runs, and produces exactly 
 the same results,
 but it is so slow that it's completely unusable. This is one of 
 the main reasons
 we're still using D1.
I didn't know this. I'd like more details - perhaps I can help with how to deal with it.
Our entire codebase assumes that stomping will happen. Simplest example:

T[] dupArray(T)(ref T[] dest, T[] src)
{
    dest.length = src.length;
    if (src.length) {
        dest[] = src[];
    }
    return dest;
}

This is equivalent to dest = src.dup, but if dest was already long enough to contain src, no allocation occurs.

Sure, we can add a call to "assumeSafeAppend()" everywhere. And I mean *everywhere*. Every single instance of array creation or concatenation, without exception. Almost every array in our codebase is affected by this.
Mar 14 2014
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 14 Mar 2014 06:06:41 -0400, Don <x nospam.com> wrote:

 On Thursday, 13 March 2014 at 19:28:59 UTC, Walter Bright wrote:
 On 3/13/2014 1:43 AM, Don wrote:
 The worst breaking change in D2, by far, is the prevention of array  
 stomping.

 After that change, our code still runs, and produces exactly the same  
 results,
 but it is so slow that it's completely unusable. This is one of the main  
 reasons
 we're still using D1.
I didn't know this. I'd like more details - perhaps I can help with how to deal with it.
 Our entire codebase assumes that stomping will happen. Simplest example:

 T[] dupArray(T)(ref T[] dest, T[] src)
 {
      dest.length = src.length;
      if (src.length) {
          dest[] = src[];
      }
      return dest;
 }
OK, thanks. This is not the same as setting length to 0. You would have to re-introduce stomping completely to fix this without modification, and that will break too much library code. My proposed idea is not going to help here.

Fixing this function involves adding assumeSafeAppend:

T[] dupArray(T)(ref T[] dest, T[] src)
{
    dest.length = src.length;
    dest.assumeSafeAppend();
    ... // the rest is the same.
}
 This is equivalent to dest = src.dup, but if dest was already long  
 enough to contain src, no allocation occurs.

 Sure, we can add a call to "assumeSafeAppend()" everywhere. And I mean  
 *everywhere*. Every single instance of array creation or concatenation,  
 without exception. Almost every array in our codebase is affected by  
 this.
Only array shrinking. Concatenation always allocates. Doing it after an append is redundant. Doing it after creation is redundant. Doing it *before* appending is possibly proactive, but I think just doing it whenever the length might shrink is good enough.

I think it may be best to introduce a new array property:

dest.slength = src.length; // same as dest.length = src.length, but follows D1 rules (slength = stomp length)

If you use slicing (I'm assuming you do), then appending would have to become a function/member instead of ~=. I can help write those if you want.

-Steve
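A sketch of how the proposed property could be written today as a UFCS helper. "slength" is Steven's proposed name, not an existing druntime or Phobos API:

```d
// Hypothetical helper: shrinking follows D1 rules (the dropped tail
// may be stomped by the next append); growing is unchanged.
@property void slength(T)(ref T[] arr, size_t n)
{
    if (n <= arr.length)
    {
        arr = arr[0 .. n];
        arr.assumeSafeAppend(); // re-arm in-place appending
    }
    else
        arr.length = n;
}

void main()
{
    auto a = new int[](8);
    auto p = a.ptr;
    a.slength = 2;    // shrink without giving up the block
    a ~= 5;           // appends in place instead of reallocating
    assert(a.ptr is p);
    assert(a.length == 3);
}
```

Replacing `dest.length = src.length` with `dest.slength = src.length` in code like dupArray would restore the D1 allocation behaviour.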
Mar 14 2014
parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Steven Schveighoffer:

 I think it may be best to introduce a new array property:

 dest.slength = src.length; // same as dest.length = src.length, 
 but follows D1 rules (slength = stomp length)

 If you use slicing (I'm assuming you do), then appending would 
 have to become a function/member instead of ~=.

 I can help write those if you want.
Eventually the D1 legacy will go away, so it's probably better to introduce something that can later be removed safely. So instead of "slength", a name like "__slength" is better, to allow its usage only when compiling with the "-d" switch (which activates deprecated features). And instead of a built-in array property like "length", it could be better (if possible) to make it a function in the object module, like assumeSafeAppend.

Bye,
bearophile
Mar 14 2014
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 14 Mar 2014 10:00:23 -0400, bearophile <bearophileHUGS lycos.com>  
wrote:

 Steven Schveighoffer:

 I think it may be best to introduce a new array property:

 dest.slength = src.length; // same as dest.length = src.length, but  
 follows D1 rules (slength = stomp length)

 If you use slicing (I'm assuming you do), then appending would have to  
 become a function/member instead of ~=.

 I can help write those if you want.
Eventually the D1 legacy will go away, so probably it's better to introduce something that later can be removed safely. So instead of "slength" it's better a name like "__slength", to allow its usage only if you compile with the "-d" compile switch (that activates deprecated features) and instead of a built-in array property as "length" it could be better (if possible) to make it a function in the object module as assumeSafeAppend.
I hope you understand that I was not recommending adding this to druntime or phobos. This is not a typical requirement. Anyone starting with D2 code would use something other than straight array slices to do this kind of work.

-Steve
Mar 14 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/14/2014 3:06 AM, Don wrote:
 Our entire codebase assumes that stomping will happen. Simplest example:

 T[] dupArray(T)(ref T[] dest, T[] src)
 {
      dest.length = src.length;
      if (src.length) {
          dest[] = src[];
      }
      return dest;
 }

 This is equivalent to dest = src.dup, but if dest was already long enough to
 contain src, no allocation occurs.

 Sure, we can add a call to "assumeSafeAppend()" everywhere. And I mean
 *everywhere*. Every single instance of array creation or concatentation,
without
 exception. Almost every array in our codebase is affected by this.
I see. I'd start by replacing T[] with:

    alias T[] Q;

and then using Q instead of T[]. Once that is done, you can experiment with different implementations of Q.

In my own D code, I've found this to be very effective. It's great to be able to swap in/out utterly different implementations of Q and measure performance.
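A sketch of the alias technique, with a hypothetical AppenderBuf type standing in for an alternative implementation; the point is that only the alias changes when swapping representations:

```d
import std.array : Appender;

// Invented example type: a thin wrapper over std.array.Appender that
// exposes the two operations this code needs.
struct AppenderBuf(T)
{
    Appender!(T[]) impl;
    void opOpAssign(string op : "~")(T v) { impl.put(v); }
    size_t length() { return impl.data.length; }
}

// The single point of change: flip the version to go back to plain slices.
version (UseBuiltinArrays)
    alias Q(T) = T[];
else
    alias Q(T) = AppenderBuf!T;

void main()
{
    Q!int xs;          // the rest of the code never names the concrete type
    xs ~= 1;
    xs ~= 2;
    assert(xs.length == 2);
}
```

Measuring both versions against the same workload is then a one-line switch.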
Mar 14 2014
prev sibling next sibling parent "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Thursday, 13 March 2014 at 04:24:01 UTC, Manu wrote:
 You and Andrei are the only resistance in this thread so far. 
 Why don't you
 ask 'temperamental client' what their opinion is? Give them a 
 heads up,
 perhaps they'll be more reasonable than you anticipate?
 Both myself and Don have stated on behalf of industrial clients 
 that we
 embrace breaking changes that move the language forward, or 
 correct clearly
 identifiable mistakes.
As the CTO of SR Labs, with a huge D2 codebase in production products, I agree with Manu, and I'm stating that we will be happy to spend some time updating deprecated features if the goal is a better language. - Paolo Invernizzi
Mar 13 2014
prev sibling parent Xavier Bigand <flamaros.xavier gmail.com> writes:
Le 13/03/2014 05:23, Manu a écrit :
 On 13 March 2014 11:13, Walter Bright <newshound2 digitalmars.com
 <mailto:newshound2 digitalmars.com>> wrote:

     On 3/12/2014 5:40 PM, Vladimir Panteleev wrote:

         Doesn't this sort of seal the language's fate in the long run,
         though?
         Eventually, new programming languages will appear which will
         learn from D's
         mistakes, and no new projects will be written in D.

         Wasn't it here that I heard that a language which doesn't evolve
         is a dead
         language?

           From looking at the atmosphere in this newsgroup, at least to
         me it appears
         obvious that there are, in fact, D users who would be glad to
         have their D code
         broken if it means that it will end up being written in a better
         programming
         language. (I'm one of them, for the record; I regularly break my
         own code anyway
         when refactoring my library.) Although I'm not advocating
         forking off a D3 here
         and now, the list of things we wish we could fix is only going
         to grow.


     There are costs and benefits:

     Benefits:

     1. attracting new people with better features

     Costs:

     2. losing existing users by breaking their code, losing new people
     because of a reputation for instability

     There aren't clearcut answers. It's a judgement call. The
     final-by-default has very large breakage costs, and its improvement
     is minor and achievable by other means.


 It's not minor, and it's not achievable by other means though.
 It's also not a very large breaking change. In relative terms, it's
 already quantified - expected to be much smaller than override was; this
 only affects bases, override affected all branches and leaves.

 You and Andrei are the only resistance in this thread so far. Why don't
 you ask 'temperamental client' what their opinion is? Give them a heads
 up, perhaps they'll be more reasonable than you anticipate?
 Both myself and Don have stated on behalf of industrial clients that we
 embrace breaking changes that move the language forward, or correct
 clearly identifiable mistakes.

 One of the key advantages to D over other languages in my mind is
 precisely it's fluidity. The idea that this is a weakness doesn't
 resonate with me in any way (assuming that it's managed in a
 sensible/predictable/well-communicated manner).
 I *like* fixing my code when a breaking change fixes something that was
 unsatisfactory, and it seems that most others present feel this way too.
 I've used C++ for a long time, I know very well how much I hate carrying
 language baggage to the end of my years.
I completely agree with you: the biggest issue with C++ is that it is so slow to evolve, like a diplodocus. I thought D was born to fix this issue. Why weren't C++ includes replaced by imports at some point during the last 20 years? It's nice that bigger companies can distribute compilation, but small companies simply can't, and have to wait a long time to get a binary.

I don't want to see any more bad decisions living for so long; it's normal to make mistakes, but it's a critical mistake not to fix them. It all comes down to the rhythm at which you force D developers to update their old code. Some manufacturers, like Nintendo or Apple, give you a few months to migrate your code, after which submissions will be refused.
 That said, obviously there's a big difference between random breakage
 and controlled deprecation.
 The 2 need to stop being conflated. I don't think they are the same
 thing, and can't reasonably be compared.
 I've never heard anybody object to the latter if it's called for. The
 std.json example you raise is a clear example of the former.
Mar 13 2014
prev sibling parent reply Xavier Bigand <flamaros.xavier gmail.com> writes:
Le 13/03/2014 02:13, Walter Bright a écrit :
 On 3/12/2014 5:40 PM, Vladimir Panteleev wrote:
 Doesn't this sort of seal the language's fate in the long run, though?
 Eventually, new programming languages will appear which will learn
 from D's
 mistakes, and no new projects will be written in D.

 Wasn't it here that I heard that a language which doesn't evolve is a
 dead
 language?

  From looking at the atmosphere in this newsgroup, at least to me it
 appears
 obvious that there are, in fact, D users who would be glad to have
 their D code
 broken if it means that it will end up being written in a better
 programming
 language. (I'm one of them, for the record; I regularly break my own
 code anyway
 when refactoring my library.) Although I'm not advocating forking off
 a D3 here
 and now, the list of things we wish we could fix is only going to grow.
There are costs and benefits:

Benefits:

1. attracting new people with better features

Costs:

2. losing existing users by breaking their code, losing new people because of a reputation for instability

There aren't clearcut answers. It's a judgement call. Final-by-default has very large breakage costs, and its improvement is minor and achievable by other means.
IMO the major issue with changes is that they arrive in new versions of dmd bundled with other fixes. Why aren't compiler fixes back-ported to some older dmd versions? That would make it possible to get fixes without breaking changes when a D user is close to making a release. I know it's a lot of additional work for the D community, but it would offer more choices.

Another idea is to create a major revision of D each year (maybe shortly after DConf) and present the previous one as the stable release. The stable release would receive all fixes and non-breaking changes. This rhythm of one version per year would slow down over time as the need for breaking changes decreases. Doing things this way would allow companies to migrate to the next stable D version progressively (with version checks).

We did something similar at my job with Qt: we started with version 4.8 and Necessitas (unofficial Android support), and at the same time worked on 5.0 alpha integration (I filed some bug reports to help its development).
Mar 13 2014
parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Xavier Bigand"  wrote in message news:lfslr2$1tu6$1 digitalmars.com...

 IMO the major issue with changes is that they are propagated on new 
 versions of dmd with other fixes. Why compiler fixes aren't back-ported on 
 some old dmd versions? This will let opportunity to get fixes without 
 breaking changes when a D users is to close of making a release.
Because...
 I know it's a lot additional work for the D community but it will offer 
 more choices.
This is why. Nobody has volunteered to do it. It will happen when somebody who wants it actually does it.
Mar 13 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/13/14, 9:28 AM, Daniel Murphy wrote:
 "Xavier Bigand"  wrote in message news:lfslr2$1tu6$1 digitalmars.com...
 I know it's a lot additional work for the D community but it will
 offer more choices.
This is why. Nobody has volunteered to do it. It will happen when somebody who wants it actually does it.
Word. I can't emphasize this enough. Andrei
Mar 13 2014
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/12/14, 5:40 PM, Vladimir Panteleev wrote:
 On Thursday, 13 March 2014 at 00:18:06 UTC, Andrei Alexandrescu wrote:
 On 3/12/14, 5:02 PM, Chris Williams wrote:
 As someone who would like to be able to use D as a language,
 professionally, it's more important to me that D gain future clients
 than that it maintains the ones that it has. Even more important is that
 it does both of those things.
The saying goes, "you can't make a bucket of yogurt without a spoonful of rennet". The pattern of resetting customer code into the next version must end. It's the one thing that both current and future users want: a pattern of stability and reliability.
Doesn't this sort of seal the language's fate in the long run, though? Eventually, new programming languages will appear which will learn from D's mistakes, and no new projects will be written in D.
Let's get to the point where we need to worry about that :o).
 Wasn't it here that I heard that a language which doesn't evolve is a
 dead language?
Evolving is different from incessantly changing.
 From looking at the atmosphere in this newsgroup, at least to me it
 appears obvious that there are, in fact, D users who would be glad to
 have their D code broken if it means that it will end up being written
 in a better programming language.
This is not my first gig. Due to simple social dynamics, forum participation saturates. In their heydays, forums like comp.lang.c++.moderated, comp.lang.tex, and comp.lang.perl had traffic comparable to ours, although their community was 1-2 orders of magnitude larger. Although it seems things are business as usual in our little hood here, there is a growing silent majority of D users who aren't on the forum. Andrei
Mar 12 2014
next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Thursday, 13 March 2014 at 02:48:14 UTC, Andrei Alexandrescu 
wrote:
 On 3/12/14, 5:40 PM, Vladimir Panteleev wrote:
 From looking at the atmosphere in this newsgroup, at least to 
 me it
 appears obvious that there are, in fact, D users who would be 
 glad to
 have their D code broken if it means that it will end up being 
 written
 in a better programming language.
This is not my first gig. Due to simple social dynamics, forum participation saturates. In their heydays, forums like comp.lang.c++.moderated, comp.lang.tex, and comp.lang.perl had traffic comparable to ours, although their community was 1-2 orders of magnitude larger. Although it seems things are business as usual in our little hood here, there is a growing silent majority of D users who aren't on the forum.
So are you saying that the users who participate on the forum are not representative of the entire D user base? I.e. the % of users who wouldn't mind breaking changes is higher on the forum?
Mar 12 2014
next sibling parent "Meta" <jared771 gmail.com> writes:
On Thursday, 13 March 2014 at 03:05:07 UTC, Vladimir Panteleev 
wrote:
 So are you saying that the users who participate on the forum 
 are not representative of the entire D user base? I.e. the % of 
 users who wouldn't mind breaking changes is higher on the forum?
The 1% rule suggests that this forum represents a small minority of D users.

http://en.wikipedia.org/wiki/1%25_rule_%28Internet_culture%29
Mar 12 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/12/14, 8:05 PM, Vladimir Panteleev wrote:
 On Thursday, 13 March 2014 at 02:48:14 UTC, Andrei Alexandrescu wrote:
 On 3/12/14, 5:40 PM, Vladimir Panteleev wrote:
 From looking at the atmosphere in this newsgroup, at least to me it
 appears obvious that there are, in fact, D users who would be glad to
 have their D code broken if it means that it will end up being written
 in a better programming language.
This is not my first gig. Due to simple social dynamics, forum participation saturates. In their heydays, forums like comp.lang.c++.moderated, comp.lang.tex, and comp.lang.perl had traffic comparable to ours, although their community was 1-2 orders of magnitude larger. Although it seems things are business as usual in our little hood here, there is a growing silent majority of D users who aren't on the forum.
So are you saying that the users who participate on the forum are not representative of the entire D user base?
They are representative in the sense they're among the most passionate, competent, and influential of the user base. For the same reasons they are a couple of standard deviations away in certain respects from the majority.
 I.e. the % of users who
 wouldn't mind breaking changes is higher on the forum?
I believe so, and I have many examples. Most people who don't hang out in the forum just want to get work done without minding every single language advocacy subtlety, and breakages prevent them from getting work done. Andrei
Mar 12 2014
next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Thursday, 13 March 2014 at 03:15:25 UTC, Andrei Alexandrescu 
wrote:
 I believe so, and I have many examples. Most people who don't 
 hang out in the forum just want to get work done without 
 minding every single language advocacy subtlety, and breakages 
 prevent them from getting work done.
I see. That's quite insightful, thanks.
Mar 12 2014
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Mar 13, 2014 at 03:17:39AM +0000, Vladimir Panteleev wrote:
 On Thursday, 13 March 2014 at 03:15:25 UTC, Andrei Alexandrescu
 wrote:
I believe so, and I have many examples. Most people who don't hang
out in the forum just want to get work done without minding every
single language advocacy subtlety, and breakages prevent them from
getting work done.
I see. That's quite insightful, thanks.
Just to add another data point, FWIW: recently, due to being busy with other matters, I haven't been able to keep up with this forum or do very much with D, but I do still write D code every now and then, and I do have some existing D code that I use regularly.

Not long ago, a subtle (or not-so-subtle!) change in File.ByLine caused my code to break in a major, visible way -- the code didn't fail to compile, but the semantics changed. I found this extremely upsetting, because things like file I/O are something one expects to be stable in the long term, not reinvented every other release. (In fact, I posted about this in another thread, as some of you may recall.)

The change in question was a Phobos refactoring that, arguably, improved the quality of the code by eliminating some ugly hacks that were ostensibly causing "buggy" behaviour. However, my code had come to depend on that "buggy" behaviour, and having it break like that while I was trying to "get work done" was not a pleasant experience, in spite of the fact that I am generally sympathetic to those clamoring for gradual language refinement via deprecation and redesign. (I even pushed for such breakages at times.) I can only imagine the reaction of someone who doesn't frequent this forum experiencing something like this upon upgrading DMD.

So I'd say the cost of breakage is very real, and is not something to be lightly dismissed. If I found it hard to swallow in spite of being sympathetic to those who want the language to improve through occasional breakage of deprecated features, how much more will "normal" D users feel it when things break under their existing code? (Now of course I'm not saying we shouldn't ever introduce any breaking changes -- I know of at least one occasion where I was actually *pleased* by a particular breaking change -- but I'm saying that we need to weigh the costs carefully and tread very lightly, because this is just the kind of thing that will turn people away from D.)
T -- They pretend to pay us, and we pretend to work. -- Russian saying
Mar 12 2014
prev sibling parent "Joseph Cassman" <jc7919 outlook.com> writes:
On Thursday, 13 March 2014 at 03:15:25 UTC, Andrei Alexandrescu 
wrote:
 I.e. the % of users who
 wouldn't mind breaking changes is higher on the forum?
I believe so, and I have many examples. Most people who don't hang out in the forum just want to get work done without minding every single language advocacy subtlety, and breakages prevent them from getting work done. Andrei
Yeah, this pretty much defines me. I've got other stuff to deal with but am very interested in using D. I just can't devote much time to making my voice heard that loudly on the forums.

From my point of view, D2 already has such a good set of features that it is sufficiently differentiated from the competition to make it a compelling product as-is. If the implementation could be polished to just work, that would help it gain a reputation for stability. That in turn would allow D to gain traction in industry, publish a specification, build a third-party tooling and library ecosystem, etc.

I applaud the restraint Walter and Andrei are showing in keeping the implementation on track. I believe this is a win in the long run. New features like final-by-default are good to consider, and for that feature in particular, I think the reasoning backing it up is compelling. But I say tabling it for consideration until D-next is the way to go to win the race, rather than just the sprint.

Joseph
Mar 12 2014
prev sibling next sibling parent reply Manu <turkeyman gmail.com> writes:
On 13 March 2014 12:48, Andrei Alexandrescu
<SeeWebsiteForEmail erdani.org>wrote:

 On 3/12/14, 5:40 PM, Vladimir Panteleev wrote:

 On Thursday, 13 March 2014 at 00:18:06 UTC, Andrei Alexandrescu wrote:

 On 3/12/14, 5:02 PM, Chris Williams wrote:

 As someone who would like to be able to use D as a language,
 professionally, it's more important to me that D gain future clients
 than that it maintains the ones that it has. Even more important is that
 it does both of those things.
The saying goes, "you can't make a bucket of yogurt without a spoonful of rennet". The pattern of resetting customer code into the next version must end. It's the one thing that both current and future users want: a pattern of stability and reliability.
Doesn't this sort of seal the language's fate in the long run, though? Eventually, new programming languages will appear which will learn from D's mistakes, and no new projects will be written in D.
Let's get to the point where we need to worry about that :o). Wasn't it here that I heard that a language which doesn't evolve is a
 dead language?
Evolving is different from incessantly changing.
Again, trivialising the importance of this change. From looking at the atmosphere in this newsgroup, at least to me it
 appears obvious that there are, in fact, D users who would be glad to
 have their D code broken if it means that it will end up being written
 in a better programming language.
This is not my first gig. Due to simple social dynamics, forum participation saturates. In their heydays, forums like comp.lang.c++.moderated, comp.lang.tex, and comp.lang.perl had traffic comparable to ours, although their community was 1-2 orders of magnitude larger. Although it seems things are business as usual in our little hood here, there is a growing silent majority of D users who aren't on the forum.
Are you suggesting that only we in this thread care about this, at the expense of that growing silent majority?

Many of the new users I've noticed appearing are from my industry. There are seemingly many new gamedevs and ambitious embedded/mobile users. The recent flurry of activity on the cross-compilers and Obj-C support is a clear demonstration of that interest. I suspect they are a significant slice of the growing majority, and certainly of the growing potential. They care about this, whether they know it or not. Most users aren't low-level experts, even though it matters to their projects.

I want to know what you think the potential or likely future breakdown of industrial application of D looks like?

I have a suspicion that when the cross-compilers are robust and word gets out, you will see a surge of game/realtime/mobile devs, and I don't think it's unrealistic, or even unlikely, to imagine that this may be D's largest developer audience at some time in the (not too distant?) future. It's the largest native-code industry left by far, its requirements are not changing, and there are no realistic alternatives I'm aware of on the horizon. Every other facet of software development I can think of has competition in the language space.
Mar 12 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/12/14, 9:40 PM, Manu wrote:
 On 13 March 2014 12:48, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org <mailto:SeeWebsiteForEmail erdani.org>>
 wrote:

     On 3/12/14, 5:40 PM, Vladimir Panteleev wrote:

         On Thursday, 13 March 2014 at 00:18:06 UTC, Andrei Alexandrescu
         wrote:

             On 3/12/14, 5:02 PM, Chris Williams wrote:

                 As someone who would like to be able to use D as a language,
                 professionally, it's more important to me that D gain
                 future clients
                 than that it maintains the ones that it has. Even more
                 important is that
                 it does both of those things.


             The saying goes, "you can't make a bucket of yogurt without
             a spoonful
             of rennet". The pattern of resetting customer code into the next
             version must end. It's the one thing that both current and
             future
             users want: a pattern of stability and reliability.


         Doesn't this sort of seal the language's fate in the long run,
         though?
         Eventually, new programming languages will appear which will
         learn from
         D's mistakes, and no new projects will be written in D.


     Let's get to the point where we need to worry about that :o).


         Wasn't it here that I heard that a language which doesn't evolve
         is a
         dead language?


     Evolving is different from incessantly changing.


 Again, trivialising the importance of this change.

          >From looking at the atmosphere in this newsgroup, at least to
         me it
         appears obvious that there are, in fact, D users who would be
         glad to
         have their D code broken if it means that it will end up being
         written
         in a better programming language.


     This is not my first gig. Due to simple social dynamics, forum
     participation saturates. In their heydays, forums like
     comp.lang.c++.moderated, comp.lang.tex, and comp.lang.perl had
     traffic comparable to ours, although their community was 1-2 orders
     of magnitude larger. Although it seems things are business as usual
     in our little hood here, there is a growing silent majority of D
     users who aren't on the forum.


 Are you suggesting that only we in this thread care about this, at the
 expense of that growing silent majority?

 Many of the new user's I've noticed appearing are from my industry.
 There are seemingly many new gamedevs or ambitious embedded/mobile
 users. The recent flurry of activity on the cross-compilers, Obj-C, is a
 clear demonstration of that interest.
 I suspect they are a significant slice of the growing majority, and
 certainly of the growing potential. They care about this, whether they
 know it or not. Most users aren't low-level experts, even though it
 matters to their projects.

 I want to know what you think the potential or likely future breakdown
 of industrial application of D looks like?

 I have a suspicion that when the cross compilers are robust and word
 gets out, you will see a surge of game/realtime/mobile devs, and I don't
 think it's unrealistic, or even unlikely, to imagine that this may be
 D's largest developer audience at some time in the (not too distant?)
 future.
 It's the largest native-code industry left by far, requirements are not
 changing, and there are no other realistic alternatives I'm aware of on
 the horizon. Every other facet of software development I can think of
 has competition in the language space.
I hear you. Time to put this in a nice but firm manner: your arguments were understood but did not convince. The matter has been settled. There will be no final by default in the D programming language. Hope you understand. Thanks, Andrei
Mar 12 2014
next sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Thursday, 13 March 2014 at 04:58:05 UTC, Andrei Alexandrescu 
wrote:
 I hear you. Time to put this in a nice but firm manner: your 
 arguments were understood but did not convince. The matter has 
 been settled. There will be no final by default in the D 
 programming language. Hope you understand.


 Thanks,

 Andrei
In light of this, and as a nod to Manu's expertise and judgment on the matter: we should make his reasoning on the importance of deliberately choosing virtual vs final in API-public classes prominent in documentation, wikis, books, and other learning materials.

It may not be important enough to justify a large language break, but if Manu says it is genuinely a problem in his industry, we should do our best to alleviate it as much as is reasonable.
Mar 13 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/13/14, 2:15 AM, John Colvin wrote:
 On Thursday, 13 March 2014 at 04:58:05 UTC, Andrei Alexandrescu wrote:
 I hear you. Time to put this in a nice but firm manner: your arguments
 were understood but did not convince. The matter has been settled.
 There will be no final by default in the D programming language. Hope
 you understand.


 Thanks,

 Andrei
In light of this, and as a nod to Manu's expertise and judgment on the matter: we should make his reasoning on the importance of deliberately choosing virtual vs final in API-public classes prominent in documentation, wikis, books, and other learning materials. It may not be important enough to justify a large language break, but if Manu says it is genuinely a problem in his industry, we should do our best to alleviate it as much as is reasonable.
I think that's a great idea. Andrei
Mar 13 2014
parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 13/03/14 16:22, Andrei Alexandrescu wrote:
 On 3/13/14, 2:15 AM, John Colvin wrote:
 In light of this and as a nod to Manu's expertise and judgment on the
 matter:

 We should make his reasoning on the importance of deliberately choosing
 virtual vs final in API-public classes prominent in documentation,
 wikis, books and other learning materials.

 It may not be important enough to justify a large language break, but
 if Manu says it is genuinely a problem in his industry, we should do our
 best to alleviate as much as is reasonable.
I think that's a great idea.
Related suggestion. I know that Walter really doesn't like compiler warnings, and to a large degree I understand his dislike. However, in this case I think we could do much to alleviate the negative effects of virtual-by-default by making it a compiler warning for a class method to be without an explicit indicator of whether it's to be final or virtual. That warning would have to be tolerant of e.g. the whole class itself being given a "final" or "virtual" marker, or of tags like "final:" or "virtual:" which capture multiple methods. The warning could include an indication to the user: "If you're not certain which is preferable, pick final." The idea would be that it be a strongly enforced D style convention to be absolutely explicit about your intentions final- and virtual-wise. (If a compiler warning is considered too strong, then the task could be given to a lint tool.)
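A hypothetical sketch of what such an always-explicit style might look like. Note that D has no `virtual` keyword, so the "deliberately virtual" intent can only be recorded in a comment, while `final` and the `final:` label are real language features:

```d
class Widget
{
    private int w, h;

    // Deliberately virtual: subclasses are expected to override this.
    void draw() { }

final:
    // Everything from the `final:` label down is deliberately non-virtual.
    int width()  { return w; }
    int height() { return h; }
}
```

Under the proposed warning, only a method carrying neither an explicit `final` (directly or via a `final:` label) nor a stated virtual intent would be flagged.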
Mar 16 2014
prev sibling parent reply "Joseph Rushton Wakeling" <joseph.wakeling gmail.com> writes:
On Thursday, 13 March 2014 at 04:58:05 UTC, Andrei Alexandrescu 
wrote:
 I hear you. Time to put this in a nice but firm manner: your 
 arguments were understood but did not convince.
The problem is that this remark could be made in both directions. I understand some of the motivation for this decision, but the way it's been announced and rationalized is very problematic. That naturally leads to questions about whether it's the right decision or not, and to be honest, I don't think the follow-ups from you and Walter have adequately addressed those concerns. Problem 1 -- the announcement as made gives the impression that a known, planned, desirable breaking change with a well-defined deprecation path is to be cancelled because of a client's response to an unplanned and unannounced breakage. You need to make the case for why well-signposted, well-executed deprecation paths are a problem that sits on the same level as the kind of unexpected breakage this client encountered. Problem 2 -- perhaps there's a broader context that you can't discuss with us because of commercial confidentiality, but the impression given is that this decision has been taken substantially in reaction to one bad client response. This gives the impression of a knee-jerk reaction made under stress rather than a balanced decision-making process. More so because it's not clear if the client would have the same problem with a well-executed deprecation process. Problem 3 -- I don't think this decision has adequately acknowledged the original rationale for favouring final-by-default. Walter has discussed the speed concern, but that was not the killer argument -- the one which swung the day was the fact that final-by-default makes it easier to avoid making breaking changes in future -- see e.g.: http://forum.dlang.org/thread/pzysdctqxjadoraeexaa forum.dlang.org?page=10#post-mailman.246.1386164839.3242.digitalmars-d:40puremagic.com http://www.artima.com/intv/nonvirtualP.html So, if avoiding breaking change in future is a strong goal, allowing the transition to final-by-default is a clear contribution to that goal. 
Finally, I'd say that to my mind, these kinds of announcements-by-fiat that come out of the blue and without warning, while not as bad as unexpected code breakage, are still pretty bad for the D user community. We need to be able to have trust in the firm decisions and understandings reached here in community discussions, that either they will be adhered to or that there will be prior notice and discussion before any counter-decision is finalized. This is as much part of stability and reliability as the code in the compiler and the libraries. Best wishes, -- Joe
Mar 13 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/13/14, 7:48 AM, Joseph Rushton Wakeling wrote:
 On Thursday, 13 March 2014 at 04:58:05 UTC, Andrei Alexandrescu wrote:
 I hear you. Time to put this in a nice but firm manner: your arguments
 were understood but did not convince.
The problem is that this remark could be made in both directions. I understand some of the motivation for this decision, but the way it's been announced and rationalized is very problematic. That naturally leads to questions about whether it's the right decision or not, and to be honest, I don't think the follow-ups from you and Walter have adequately addressed those concerns.
At a level it's clear it's not a matter of right or wrong but instead a judgment call, right? Successful languages go with either default.
 Problem 1 -- the announcement as made gives the impression that a known,
 planned, desirable breaking change with a well-defined deprecation path
 is to be cancelled because of a client's response to an unplanned and
 unannounced breakage.  You need to make the case for why
 well-signposted, well-executed deprecation paths are a problem that sits
 on the same level as the kind of unexpected breakage this client
 encountered.
The breakage was given as an example. We would have decided the same without that happening.
 Problem 2 -- perhaps there's a broader context that you can't discuss
 with us because of commercial confidentiality, but the impression given
 is that this decision has been taken substantially in reaction to one
 bad client response.  This gives the impression of a knee-jerk reaction
 made under stress rather than a balanced decision-making process.  More
 so because it's not clear if the client would have the same problem with
 a well-executed deprecation process.
I'm more than sure a well-executed deprecation process helps, although it's not perfect. We're not encumbered by exhausting confidentiality requirements etc.
 Problem 3 -- I don't think this decision has adequately acknowledged the
 original rationale for favouring final-by-default.  Walter has discussed
 the speed concern, but that was not the killer argument -- the one which
 swung the day was the fact that final-by-default makes it easier to
 avoid making breaking changes in future -- see e.g.:
 http://forum.dlang.org/thread/pzysdctqxjadoraeexaa forum.dlang.org?page=10#post-mailman.246.1386164839.3242.digitalmars-d:40puremagic.com

 http://www.artima.com/intv/nonvirtualP.html

 So, if avoiding breaking change in future is a strong goal, allowing the
 transition to final-by-default is a clear contribution to that goal.
There's some underlying assumption here that if we "really" understood the arguments we'd be convinced. Speaking for myself, I can say I understand the arguments very well. I don't know how to acknowledge them better than I've already done.
 Finally, I'd say that to my mind, these kinds of announcements-by-fiat
 that come out of the blue and without warning, while not as bad as
 unexpected code breakage, are still pretty bad for the D user
 community.  We need to be able to have trust in the firm decisions and
 understandings reached here in community discussions, that either they
 will be adhered to or that there will be prior notice and discussion
 before any counter-decision is finalized.  This is as much part of
 stability and reliability as the code in the compiler and the libraries.
Thanks for being candid about this. I have difficulty, however, picturing how to do a decision point better. At some point a decision will be made. It's a judgment call, that in some reasonable people's opinion, is wrong, and in some other reasonable people's opinion, is right. For such, we're well past arguments' time - no amount of arguing would convince. I don't see how to give better warning about essentially a Boolean decision point that precludes pursuing the converse design path. Andrei
Mar 13 2014
next sibling parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Thursday, 13 March 2014 at 15:57:22 UTC, Andrei Alexandrescu 
wrote:
 At a level it's clear it's not a matter of right or wrong but 
 instead a judgment call, right? Successful languages go with 
 either default.

 Andrei
For what it's worth, "education" could help solve this problem: For example, "people" were surprised by the context pointer in nested structs, and now (I think), it is (or will become) second nature to always type: static struct S {...} Heck, I even do it for my global structs too now. So maybe the issue could be solved by educating people to always type (by default): final class A {...} The first step in this direction would be to update the documentation to add said final in as many places where it makes sense, and then to do the same thing to the Phobos code base.
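As an illustration of the nested-struct surprise mentioned above, here is a minimal sketch; the exact sizes are an assumption about a typical 64-bit build:

```d
import std.stdio;

void main()
{
    // Declared inside a function without `static`: the struct carries a
    // hidden context pointer to the enclosing stack frame.
    struct Nested { int x; }

    // `static` removes the context pointer, giving a plain value type.
    static struct Plain { int x; }

    // On a typical 64-bit build one would expect Nested.sizeof to be
    // larger than Plain.sizeof (e.g. 16 vs 4) because of that pointer.
    writeln(Nested.sizeof, " vs ", Plain.sizeof);
}
```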
Mar 13 2014
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
13-Mar-2014 21:52, monarch_dodra writes:
 On Thursday, 13 March 2014 at 15:57:22 UTC, Andrei Alexandrescu wrote:
 At a level it's clear it's not a matter of right or wrong but instead
 a judgment call, right? Successful languages go with either default.

 Andrei
For what it's worth, "education" could help solve this problem: For example, "people" were surprised by the context pointer in nested structs, and now (I think), it is (or will become) second nature to always type: static struct S {...} Heck, I even do it for my global structs too now. So maybe the issue could be solved by educating people to always type (by default): final class A {...}
Not the same as `final:` inside - the above just means you can't inherit from A. Funnily enough any new methods in A would still be virtual even if nothing can inherit from it! -- Dmitry Olshansky
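For clarity, a sketch contrasting the two spellings under discussion (this makes no claim about what vtable entries a given compiler actually emits; that question gets resolved later in the thread):

```d
// `final` on the class head only says A cannot be inherited from:
final class A
{
    void f() { }   // not marked final itself - Dmitry's point
}

// `final:` inside the class marks each following method non-virtual,
// independently of whether the class itself can be inherited from:
class B
{
final:
    void f() { }   // cannot be overridden in any subclass
}
```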
Mar 13 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/13/14, 12:38 PM, Dmitry Olshansky wrote:
 Not the same as `final:` inside - the above just means you can't inherit
 from A. Funnily enough any new methods in A would still be virtual even
 if nothing can inherit from it!
Looks like we could improve on this. Is there an enhancement request available? Andrei
Mar 13 2014
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
13-Mar-2014 23:40, Andrei Alexandrescu writes:
 On 3/13/14, 12:38 PM, Dmitry Olshansky wrote:
 Not the same as `final:` inside - the above just means you can't inherit
 from A. Funnily enough any new methods in A would still be virtual even
 if nothing can inherit from it!
Looks like we could improve on this. Is there an enhancement request available?
None that I know of. P.S. I have a strong distaste for the way OOP is currently designed in D anyway. No, I don't propose to change it. -- Dmitry Olshansky
Mar 13 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/13/14, 12:49 PM, Dmitry Olshansky wrote:
 13-Mar-2014 23:40, Andrei Alexandrescu writes:
 On 3/13/14, 12:38 PM, Dmitry Olshansky wrote:
 Not the same as `final:` inside - the above just means you can't inherit
 from A. Funnily enough any new methods in A would still be virtual even
 if nothing can inherit from it!
Looks like we could improve on this. Is there an enhancement request available?
None that I know of.
Just played with some code a bit, turns out at least gdc inlines calls for final classes appropriately: http://goo.gl/oV8EYu. What would be a reliable way to detect that a vtable entry is generated?
 P.S. I have a strong distaste for the way OOP is currently designed in D
 anyway. No, I don't propose to change it.
I take it that overridable by default is part of it. What other things are not to your liking? Andrei
Mar 13 2014
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
14-Mar-2014 00:00, Andrei Alexandrescu writes:
 On 3/13/14, 12:49 PM, Dmitry Olshansky wrote:
 13-Mar-2014 23:40, Andrei Alexandrescu writes:
 On 3/13/14, 12:38 PM, Dmitry Olshansky wrote:
 Not the same as `final:` inside - the above just means you can't
 inherit
 from A. Funnily enough any new methods in A would still be virtual even
 if nothing can inherit from it!
Looks like we could improve on this. Is there an enhancement request available?
None that I know of.
Just played with some code a bit, turns out at least gdc inlines calls for final classes appropriately: http://goo.gl/oV8EYu. What would be a reliable way to detect that a vtable entry is generated?
Mistake on my part. This:

final class A {
    int i;
    void f() { ++i; }
    void g() { ++i; }
}
pragma(msg, __traits(isFinalFunction, A.g));
pragma(msg, __traits(isFinalFunction, A.f));

prints:

true
true

during compilation, which I take to mean that no vtable entries should be generated.
 P.S. I have a strong distaste for the way OOP is currently designed in D
 anyway. No, I don't propose to change it.
I take it that overridable by default is part of it. What other things are not to your liking?
In no particular order:

1. Qualifiers apply to both the reference and the instance of a class. Recall the ref Object problem and Rebindable!T in Phobos. (It would be far more fair to simply admit that classes and OOP are about MUTABLE state, and to close the whole const problem (of the instance itself) with them.)

2. Related: apparently OOP was not prepared for the TLS-by-default scheme. The monitor field per class instance is an indication of that fact.

3. Memory management policy is coupled with polymorphism. I have plenty of use-cases where providing a polymorphic interface is great, but there is no use in GCing these objects. No, emplace and dirty work is not a good way out.

4. There is a TypeInfo per class but no type-switch? Just ahwww.

5. Casts look just like plain casts but in fact do dynamic casts. And there is no static one in sight, nor a _documented_ way to do it. *cast(void**)&object is not nice.

6. Related to the 1st one: since class instance and reference are conflated, there is no way to do 'composition':

class A { ... }

class B {
    A a; // this is delegation, a pointer (!)
    void foo() { a.foo(); ... }
}

Memory layout goes under the bus. There should have been a way to put the contents of the A instance in the same memory area as the B instance.

7. Related to 6: empty base class optimization, by which I mean in general using 0 _extra_ bytes per empty class inside of another one. It mostly can't be done because of:
a) lack of (any form of) multiple inheritance;
b) delegation instead of composition, see also 6.

8. No this-call function pointers. Delegates are nice, but what's the problem with:

class A { void foo(); }

having &A.foo, where A is a class, return the equivalent of:

extern(this_call) void foo(A _this);

Having to conjure fake delegates and twiddle with the context pointer is both inefficient and silly.

~~~ Personal matter of taste ~~~

1. This is not counting the fact that I simply do not see sense in going with anything beyond a fixed-function duo of Traits (extended interfaces, see e.g. Scala) and final/concrete classes. Any attempts to reuse code otherwise are welcome to use plain composition.

2. The root Object should be an opaque reference. Hashing, toString, opEquals, opCmp belong in standard traits/interfaces.

3. Multiple inheritance with linearization of the inheritance tree.

-- Dmitry Olshansky
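As a small illustration of point 5 in the list above (hypothetical class names; a sketch of the behavior as described, not a recommendation):

```d
class Base { }
class Derived : Base { }

void main()
{
    Base b = new Derived;

    // Looks like a plain cast, but is a runtime-checked dynamic cast:
    // it yields null when the object is not actually a Derived.
    assert(cast(Derived) b !is null);

    Base plain = new Base;
    assert(cast(Derived) plain is null);

    // The "not nice" unchecked escape hatch mentioned above:
    // reinterpret the class reference as a raw pointer, no runtime check.
    void* raw = *cast(void**) &b;
    assert(raw !is null);
}
```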
Mar 13 2014
parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 3/13/14, Dmitry Olshansky <dmitry.olsh gmail.com> wrote:
 This:

 final class A {
      int i;
      void f() { ++i; }
      void g() { ++i; }

 }
 pragma(msg, __traits(isFinalFunction, A.g));
 pragma(msg, __traits(isFinalFunction, A.f));
Speaking of final classes, I ran into this snippet a few weeks ago in src/gc/gc.d:

-----
// This just makes Mutex final to de-virtualize member function calls.
final class GCMutex : Mutex {}
-----

But does this actually happen?
Mar 16 2014
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 13 Mar 2014 15:38:19 -0400, Dmitry Olshansky <dmitry.olsh gmail.com> wrote:

 13-Mar-2014 21:52, monarch_dodra writes:
 final class A
 {...}
Not the same as `final:` inside - the above just means you can't inherit from A. Funnily enough any new methods in A would still be virtual even if nothing can inherit from it!
What? Where's that low hanging fruit again? -Steve
Mar 13 2014
prev sibling next sibling parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Thursday, 13 March 2014 at 19:38:20 UTC, Dmitry Olshansky 
wrote:
 13-Mar-2014 21:52, monarch_dodra writes:
 So maybe the issue could be solved by educating to always type 
 (by
 default):

 final class A
 {...}
Not the same as `final:` inside - the above just means you can't inherit from A. Funnily enough any new methods in A would still be virtual even if nothing can inherit from it!
What!? That's... horrible! What's the point of that? Is there *any* situation where that makes sense?
Mar 13 2014
parent "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 13 March 2014 at 20:21:51 UTC, monarch_dodra wrote:
 On Thursday, 13 March 2014 at 19:38:20 UTC, Dmitry Olshansky 
 wrote:
 13-Mar-2014 21:52, monarch_dodra writes:
 So maybe the issue could be solved by educating to always 
 type (by
 default):

 final class A
 {...}
Not the same as `final:` inside - the above just means you can't inherit from A. Funnily enough any new methods in A would still be virtual even if nothing can inherit from it!
What!? That's... horrible! What's the point of that? Is there *any* situation where that makes sense?
That is certainly a compiler bug if this is the case.
Mar 13 2014
prev sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Thursday, 13 March 2014 at 19:38:20 UTC, Dmitry Olshansky 
wrote:
 Not the same as `final:` inside - the above just means you 
 can't inherit from A. Funnily enough any new methods in A would 
 still be virtual even if nothing can inherit from it!
This should have been fixed years ago: https://d.puremagic.com/issues/show_bug.cgi?id=2326 Is it a regression?
Mar 13 2014
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
14-Mar-2014 00:24, Vladimir Panteleev writes:
 On Thursday, 13 March 2014 at 19:38:20 UTC, Dmitry Olshansky wrote:
 Not the same as `final:` inside - the above just means you can't
 inherit from A. Funnily enough any new methods in A would still be
 virtual even if nothing can inherit from it!
This should have been fixed years ago: https://d.puremagic.com/issues/show_bug.cgi?id=2326 Is it a regression?
I stand corrected. I double checked and indeed the compiler produces direct calls. -- Dmitry Olshansky
Mar 13 2014
next sibling parent "monarch_dodra" <monarchdodra gmail.com> writes:
On Thursday, 13 March 2014 at 20:42:15 UTC, Dmitry Olshansky 
wrote:
 14-Mar-2014 00:24, Vladimir Panteleev writes:
 On Thursday, 13 March 2014 at 19:38:20 UTC, Dmitry Olshansky 
 wrote:
 Not the same as `final:` inside - the above just means you 
 can't
 inherit from A. Funnily enough any new methods in A would 
 still be
 virtual even if nothing can inherit from it!
This should have been fixed years ago: https://d.puremagic.com/issues/show_bug.cgi?id=2326 Is it a regression?
 I stand corrected. I double checked and indeed the compiler produces direct calls.
Whew!
Mar 13 2014
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/13/14, 1:42 PM, Dmitry Olshansky wrote:
 14-Mar-2014 00:24, Vladimir Panteleev writes:
 On Thursday, 13 March 2014 at 19:38:20 UTC, Dmitry Olshansky wrote:
 Not the same as `final:` inside - the above just means you can't
 inherit from A. Funnily enough any new methods in A would still be
 virtual even if nothing can inherit from it!
This should have been fixed years ago: https://d.puremagic.com/issues/show_bug.cgi?id=2326 Is it a regression?
 I stand corrected. I double checked and indeed the compiler produces direct calls.
My other question stands :o). Andrei
Mar 13 2014
prev sibling parent reply Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 13/03/14 16:57, Andrei Alexandrescu wrote:
 At a level it's clear it's not a matter of right or wrong but instead a
judgment
 call, right? Successful languages go with either default.
Sorry for the delay in responding here. Yes, it is a judgement call, and what's more, I think that probably just about all of us here recognize that you and Walter need to make such judgement calls sometimes, to mediate between the different requirements of different users. In this case, what really bothers me is less that I disagree with the judgement call (that happens), more that it was a decision reached without any kind of community engagement before finalizing it. This isn't meant as some kind of misguided call for democracy or voting or "respecting the community's wishes" -- the point is simply that every decision made without prior discussion and debate carries a social cost in terms of people's ability to make reliable plans for future development. This is particularly true when the decision (like this) is to reverse what most people seem to have understood was an accepted and agreed-on development goal.
 The breakage was given as an example. We would have decided the same without
 that happening.
Understood. I hope it's clear why this was not apparent from the original announcement.
 More than sure a well-executed deprecation process helps although it's not
 perfect. We're not encumbered by exhausting confidentiality requirements etc.
Thanks for clarifying that. I'm sorry if my question about this seemed excessively paranoid, but it really wasn't clear from the original announcement how much of the motivation for the decision arose out of client pressure. I felt it was better to ask rather than to be uncertain. Regarding deprecation processes: I do take your point that no matter how well planned, and no matter how obvious the deprecation path may seem, any managed change has the potential to cause unexpected breakage _simply because things are being changed_. On the other hand, one thing that's apparent is that while substantial parts of the language are now stable and settled, there _are_ still going to have to be breaking changes in future -- both to fix outright bugs, and in areas where judgement calls over the value of the change go the other way. So, I think there needs to be some better communication of the principles by which that threshold is determined. (Obviously people will still argue over whether that threshold has been reached or not, but if the general framework for deciding yes or no is well understood then it should defuse 90% of the arguments.)
 There's some underlying assumption here that if we "really" understood the
 arguments we'd be convinced. Speaking for myself, I can say I understand the
 arguments very well. I don't know how to acknowledge them better than I've
 already done.
Actually, rather the opposite -- I know you understand the arguments very well, and therefore I have much higher expectations in terms of how detailed an explanation I think you should be prepared to offer to justify your decisions ;-) In this case part of the problem is that we got the decision first and then the more detailed responses have come in the ensuing discussion. In that context I think it's pretty important to respond to questions about this or that bit of evidence by seeing them as attempts to understand your train of thought, rather than seeing them as assumptions that you don't understand something. That said, I think it's worth noting that in this discussion we have had multiple examples of genuinely different understandings -- not just different priorities -- about how certain language features may be used or how it's desirable to use them. So it's natural that people question whether all the relevant evidence was really considered.
 Thanks for being candid about this. I have difficulty, however, picturing how
to
 do a decision point better. At some point a decision will be made. It's a
 judgment call, that in some reasonable people's opinion, is wrong, and in some
 other reasonable people's opinion, is right. For such, we're well past
 arguments' time - no amount of arguing would convince. I don't see how to give
 better warning about essentially a Boolean decision point that precludes
 pursuing the converse design path.
I think that it's a mistake to see discussion as only being about pitching arguments or changing people's minds -- discussion is also necessary and useful to set the stage for a decision, to build understanding about why a decision is necessary and what are the factors that are informing it. One of the problems with this particular issue is that probably as far as most people are concerned, a discussion was had, people pitched arguments one way and the other, some positions became entrenched, other people changed their minds, and finally, a decision was made -- by you and Walter -- in favour of final by default. And even though some people disagreed with that, everyone could see the process by which that decision was reached, and everyone felt that their opinions had been taken into account. And that was that -- a decision. So, now, you and Walter have changed your minds -- which is fine in and of itself. The question is, is it right for you to simply overturn the old decision, or is it more appropriate for you to take the time to build a new consensus around the facts that have changed since the last one? My own take on that is that the maintainer's trump card is an essential but precious resource, and that generally speaking its use should be reserved for those situations where there is an absolutely intractable dispute between different points of view that cannot be resolved other than by the maintainer's decision. And I don't think that this is such a situation. Anyway, for what it's worth, here's how I think I would have gone about this (bearing in mind that hindsight is 20/20:-). 
I would not have jumped straight to the final-by-default decision, but would have laid out the problem that needs solving -- that we need to raise the bar much higher in terms of avoiding unexpected breaking change -- and then I'd have raised the related issue: "However, we also need to take a long hard look at the _planned_ breaking changes we have in mind, and give serious consideration to which of them are really necessary, and which are merely desirable." And I'd have asked for opinions, thoughts and so forth on this. And _that_ would have been the context in which it'd have been the right moment to raise the issue of final-by-default, and probably to justify its exclusion on the grounds that while it is very desirable, it is not actually fixing an unworkable situation -- merely a problematic one. I think that in that context there would certainly still have been many concerned responses questioning the decision, but there wouldn't have been the feeling that it was a decision being imposed out of the blue for unclear reasons.
Mar 16 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/16/14, 11:46 AM, Joseph Rushton Wakeling wrote:
 Actually, rather the opposite -- I know you understand the arguments
 very well, and therefore I have much higher expectations in terms of how
 detailed an explanation I think you should be prepared to offer to
 justify your decisions ;-)
I literally can't say anything more on the subject than I've already said. I've said it all. I could, of course, reiterate my considerations, and that would have other people reiterate theirs and their opinion on how my pros don't hold that much weight and how my cons hold a lot more weight than they should.
 One of the problems with this particular issue is that probably as far
 as most people are concerned, a discussion was had, people pitched
 arguments one way and the other, some positions became entrenched, other
 people changed their minds, and finally, a decision was made -- by you
 and Walter -- in favour of final by default.
For the record I never decided that way, which explains my surprise when I saw the pull request that adds "virtual". Upon discussing with Walter it became apparent that he made that decision against his own judgment. We are presently happy and confident we did the right thing. Please do not reply to this. Let sleeping dogs tell the truth. Thanks, Andrei
Mar 16 2014
parent "bearophile" <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:

 I literally can't say anything more on the subject than I've 
 already said. I've said it all. I could, of course, reiterate my 
 considerations, and that would have other people reiterate 
 theirs and their opinion on how my pros don't hold that much 
 weight and how my cons hold a lot more weight than they should.
But was the decision taken on the basis of experimental data? I mean the kind of data Daniel Murphy has shown here: http://forum.dlang.org/thread/lfqoan$5qq$1 digitalmars.com?page=27#post-lg147n:242u14:241:40digitalmars.com Bye, bearophile
Mar 16 2014
prev sibling parent Ziad Hatahet <hatahet gmail.com> writes:
On Wed, Mar 12, 2014 at 9:40 PM, Manu <turkeyman gmail.com> wrote:

 ... you will see a surge of game/realtime/mobile devs, and I don't think
 it's unrealistic, or even unlikely, to imagine that this may be D's largest
 developer audience at some time in the (not too distant?) future.
 It's the largest native-code industry left by far, requirements are not
 changing, and there are no other realistic alternatives I'm aware of on the
 horizon.
Rust?
Mar 12 2014
prev sibling next sibling parent reply "Mike" <none none.com> writes:
On Thursday, 13 March 2014 at 00:40:34 UTC, Vladimir Panteleev 
wrote:
 The saying goes, "you can't make a bucket of yogurt without a 
 spoonful of rennet". The pattern of resetting customer code 
 into the next version must end. It's the one thing that both 
 current and future users want: a pattern of stability and 
 reliability.
Doesn't this sort of seal the language's fate in the long run, though? Eventually, new programming languages will appear which will learn from D's mistakes, and no new projects will be written in D. Wasn't it here that I heard that a language which doesn't evolve is a dead language?
IMO, one of the reasons D exists is all the historical baggage C/C++ chose to carry instead of evolving. I can cite a business case I had the displeasure of working on as well. They chose not to evolve, and instead ended up spending $11 million, years later, to rewrite their infrastructure while maintaining the antiquated one so they could still function. And the latter was never realized. They're probably due for another large, time consuming, disruptive, expensive project in the near future. Point is, it's in the best interest of both languages and businesses building on those languages to evolve, or they just end up paying the piper later (with interest). Gradual, managed change is where it's at, IMNSHO.
Mar 12 2014
next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Thursday, 13 March 2014 at 04:13:42 UTC, Mike wrote:
 On Thursday, 13 March 2014 at 00:40:34 UTC, Vladimir Panteleev 
 wrote:
 The saying goes, "you can't make a bucket of yogurt without a 
 spoonful of rennet". The pattern of resetting customer code 
 into the next version must end. It's the one thing that both 
 current and future users want: a pattern of stability and 
 reliability.
Doesn't this sort of seal the language's fate in the long run, though? Eventually, new programming languages will appear which will learn from D's mistakes, and no new projects will be written in D. Wasn't it here that I heard that a language which doesn't evolve is a dead language?
IMO, one of the reasons D exists is all the historical baggage C/C++ chose to carry instead of evolving.
I would add: baggage they chose to ignore as well, given that Modula-2, created in 1978, is just one example of a better alternative that already existed.
Mar 13 2014
prev sibling parent Xavier Bigand <flamaros.xavier gmail.com> writes:
Le 13/03/2014 05:13, Mike a écrit :
 On Thursday, 13 March 2014 at 00:40:34 UTC, Vladimir Panteleev wrote:
 The saying goes, "you can't make a bucket of yogurt without a
 spoonful of rennet". The pattern of resetting customer code into the
 next version must end. It's the one thing that both current and
 future users want: a pattern of stability and reliability.
Doesn't this sort of seal the language's fate in the long run, though? Eventually, new programming languages will appear which will learn from D's mistakes, and no new projects will be written in D. Wasn't it here that I heard that a language which doesn't evolve is a dead language?
IMO, one of the reasons D exists is all the historical baggage C/C++ chose to carry instead of evolving. I can cite a business case I had the displeasure of working on as well. They chose not to evolve, and instead ended up spending $11 million years later to rewrite their infrastructure while maintaining the antiquated one so they could still function. And the latter was never realized. They're probably due for another large, time consuming, disruptive, expensive project in the near future. Point is, it's in the best interest of both languages and businesses building on those languages to evolve, or they just end up paying the piper later (with interest). Gradual, managed change is where it's at, IMNSHO.
Just like I proposed creating a new major version every year will be gradual with a know rhythm and well announced.
Mar 13 2014
prev sibling next sibling parent reply "Sarath Kodali" <sarath dummy.com> writes:
On Thursday, 13 March 2014 at 00:40:34 UTC, Vladimir Panteleev 
wrote:
 Doesn't this sort of seal the language's fate in the long run, 
 though? Eventually, new programming languages will appear which 
 will learn from D's mistakes, and no new projects will be 
 written in D.
It won't happen that way if we evolve D such that it won't break existing code, or manage breakage with long deprecation cycles. In 1989, ANSI C introduced a new function declaration style without breaking the old K&R style. Over time the new style became the standard, and compilers issued warnings for the old one. In D, we can issue "deprecated" messages.
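For what it's worth, D's built-in `deprecated` attribute supports exactly this kind of managed transition: the old symbol keeps compiling while the compiler points callers at the replacement. A minimal sketch (the function names here are made up for illustration):

```d
import std.stdio;

// Old API, kept working but flagged: callers get a compile-time
// deprecation message instead of sudden breakage.
deprecated("use greetPerson instead")
void greet() { writeln("hello"); }

// The replacement API the message steers callers toward.
void greetPerson(string name) { writeln("hello, ", name); }

void main()
{
    greet();           // still compiles; dmd emits the deprecation message
    greetPerson("D");
}
```

With dmd, `-dw` turns such uses into warnings and `-de` turns them into errors, so a team can tighten the screws on its own schedule.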
 Wasn't it here that I heard that a language which doesn't 
 evolve is a dead language?

 From looking at the atmosphere in this newsgroup, at least to 
 me it appears obvious that there are, in fact, D users who 
 would be glad to have their D code broken if it means that it 
 will end up being written in a better programming language. 
 (I'm one of them, for the record; I regularly break my own code 
 anyway when refactoring my library.) Although I'm not 
 advocating forking off a D3 here and now, the list of things we 
 wish we could fix is only going to grow.
That is true if your code is under active development. What if you had production code that was written 2 years back? - Sarath
Mar 12 2014
next sibling parent "Mike" <none none.com> writes:
On Thursday, 13 March 2014 at 04:56:31 UTC, Sarath Kodali wrote:
 That is true if your code is under active development. What if 
 you had a production code that was written 2 years back?

 - Sarath
Code that I wrote 2 years ago in GCC 4.7 is still compiled with the same compiler binary that I used to develop and test it. I don't upgrade my tools until I'm ready to handle the risk of change. But that doesn't prevent me from using the latest version of the compiler for my latest project. I also have projects still being built in Embedded Visual C++ 6 for the same reason. If evolution of the language and tools is a risk, I don't think it's wise to upgrade. Mike
Mar 12 2014
prev sibling parent Xavier Bigand <flamaros.xavier gmail.com> writes:
Le 13/03/2014 05:56, Sarath Kodali a écrit :
 On Thursday, 13 March 2014 at 00:40:34 UTC, Vladimir Panteleev wrote:
 Doesn't this sort of seal the language's fate in the long run, though?
 Eventually, new programming languages will appear which will learn
 from D's mistakes, and no new projects will be written in D.
It won't happen that way if we evolve D such that it won't break existing code. Or manages it with long deprecation cycles. In 1989, ANSI C had come up with new function style without breaking old function style. After a while the new style has become the standard and the compilers gave warning messages. In D, we can issue "deprecated" messages.
 Wasn't it here that I heard that a language which doesn't evolve is a
 dead language?

 From looking at the atmosphere in this newsgroup, at least to me it
 appears obvious that there are, in fact, D users who would be glad to
 have their D code broken if it means that it will end up being written
 in a better programming language. (I'm one of them, for the record; I
 regularly break my own code anyway when refactoring my library.)
 Although I'm not advocating forking off a D3 here and now, the list of
 things we wish we could fix is only going to grow.
That is true if your code is under active development. What if you had a production code that was written 2 years back? - Sarath
Why was D chosen for production development 2 years back if development then stopped? It's normally not an issue if: a) D is well known in the company, and b) the old D version is still supported. For me it's clearly risky to use a young language for a short-term development; from the start you already know it will be difficult to maintain (too little knowledge in the company, security issues, bugs, ...)
Mar 13 2014
prev sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 3/12/2014 8:40 PM, Vladimir Panteleev wrote:
 On Thursday, 13 March 2014 at 00:18:06 UTC, Andrei Alexandrescu wrote:
 The saying goes, "you can't make a bucket of yogurt without a spoonful
 of rennet". The pattern of resetting customer code into the next
 version must end. It's the one thing that both current and future
 users want: a pattern of stability and reliability.
Doesn't this sort of seal the language's fate in the long run, though?
Keep in mind this isn't an all-or-nothing matter of "From now on, D will never evolve and never correct past mistakes". It's just a matter of "What's the right thing for D at this point in time?" Right now, the answer is "mature and stabilize". *After* we've gotten there, that's when we'll face a choice of "Ok, so what now?" Maybe the answer will be "Yes, at this point we have the resources, userbase, stability, etc such that we can manage doing a branch to break out of the x, y and z corners we've painted ourselves into." Or maybe it'll be "Problems x, y and z have either become mitigated because of a, b and c, or we now have previously-unseen ways to deal with them non-destructively." Right now: mature and stabilize. *Then* worry about where to go from there, breaking vs stagnating.
 Eventually, new programming languages will appear which will learn from
 D's mistakes, and no new projects will be written in D.
Sure. That's inevitable anyway. The trick is for D to prosper enough for D-next to be that new language, instead of something unrelated.
Mar 13 2014
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/12/2014 5:18 PM, Andrei Alexandrescu wrote:
 We are opposed to having compiler flags define language semantics.
Yeah, that's one of those things that always seems like a reasonable idea, but experience with it isn't happy.
Mar 12 2014
parent reply "Chris Williams" <yoreanon-chrisw yahoo.co.jp> writes:
On Thursday, 13 March 2014 at 00:48:15 UTC, Walter Bright wrote:
 On 3/12/2014 5:18 PM, Andrei Alexandrescu wrote:
 We are opposed to having compiler flags define language 
 semantics.
Yeah, that's one of those things that always seems like a reasonable idea, but experience with it isn't happy.
I would imagine that the reasons for this goal are 1) to keep the compiler and language sane, and 2) insufficient personnel to maintain legacy variants. nor frequently. your "clients" pay you to perform specific tasks, legacy compilation features will end up being maintained either by random people who fix it themselves, or a client who based his code on an older version pays you to go into the legacy branch/build target code. This is the way most open-source software works. Linux, GCC, emacs, etc. are all constantly moving targets that only through people paying Red Hat and others like them to make the insanity go away are able to work together as a single whole.
Mar 12 2014
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/12/2014 6:18 PM, Chris Williams wrote:
 On Thursday, 13 March 2014 at 00:48:15 UTC, Walter Bright wrote:
 On 3/12/2014 5:18 PM, Andrei Alexandrescu wrote:
 We are opposed to having compiler flags define language semantics.
Yeah, that's one of those things that always seems like a reasonable idea, but experience with it isn't happy.
I would imagine that the reasons for this goal are 1) to keep the compiler and language sane, and 2) insufficient personel to maintain legacy variants.
Maybe surprisingly, it isn't either. It's because every one of those switches splits the language into two languages. 8 switches means 256 languages. When you're a library vendor, which of those 256 do you support? Supporting them all is impractical, and frustrating for those who do. The combinatorics alone are daunting - 256 times through the test suite. You have to add introspection so the program can tell which language it is today. The punishment goes on and on.
Mar 12 2014
prev sibling parent reply "Sarath Kodali" <sarath dummy.com> writes:
On Thursday, 13 March 2014 at 01:18:14 UTC, Chris Williams wrote:
 On Thursday, 13 March 2014 at 00:48:15 UTC, Walter Bright wrote:
 On 3/12/2014 5:18 PM, Andrei Alexandrescu wrote:
 We are opposed to having compiler flags define language 
 semantics.
Yeah, that's one of those things that always seems like a reasonable idea, but experience with it isn't happy.
I would imagine that the reasons for this goal are 1) to keep the compiler and language sane, and 2) insufficient personel to maintain legacy variants. lightly nor frequently. your "clients" pay you to perform specific tasks, legacy compilation features will end up being maintained either by random people who fix it themselves, or a client who based his code on an older version pays you to go into the legacy branch/build target code. This is the way most open sourced software works. Linux, GCC, emacs, etc. are all constantly moving targets that only through people paying Red Hat and others like them to make the insanity go away are able to work together as a single whole.
If I'm an enterprise customer I would be very angry if my code broke with each new release of the compiler. I will be angry irrespective of whether I'm paying for the compiler or not. Because every time my code breaks, I will have to allocate resources to figure out why working production code is broken, then have to test new code, and testing can take months to complete. Languages are adopted by enterprises only when there is long-term stability to them. C code written 30 years back in K&R style still compiles without any problem. Please enhance the language but don't break existing code. Also, if something has to be deprecated, it should exist in that deprecated state for at least 5 years. Currently it is one year, and for enterprise customers that is a very short period. - Sarath
Mar 12 2014
parent reply Manu <turkeyman gmail.com> writes:
On 13 March 2014 14:37, Sarath Kodali <sarath dummy.com> wrote:

 On Thursday, 13 March 2014 at 01:18:14 UTC, Chris Williams wrote:

 On Thursday, 13 March 2014 at 00:48:15 UTC, Walter Bright wrote:

 On 3/12/2014 5:18 PM, Andrei Alexandrescu wrote:

 We are opposed to having compiler flags define language semantics.
Yeah, that's one of those things that always seems like a reasonable idea, but experience with it isn't happy.
I would imagine that the reasons for this goal are 1) to keep the compiler and language sane, and 2) insufficient personel to maintain legacy variants. frequently. "clients" pay you to perform specific tasks, legacy compilation features will end up being maintained either by random people who fix it themselves, or a client who based his code on an older version pays you to go into the legacy branch/build target code. This is the way most open sourced software works. Linux, GCC, emacs, etc. are all constantly moving targets that only through people paying Red Hat and others like them to make the insanity go away are able to work together as a single whole.
If I'm a enterprise customer I would be very angry if my code breaks with each new release of compiler. I will be angry irrespective of whether I'm paying for the compiler or not. Because every time my code breaks, I will have to allocate resources to figure out the reason why a working production code is broken and then have to test new code and testing can take months to complete.
That's not the way business works, at least, not in my neck of the woods. Having been responsible for rolling out many compiler/toolset/library upgrades personally, that's simply not how it's done. It is assumed that infrastructural update may cause disruption. No sane business just goes and rolls out updates without an initial testing and adaptation period. Time is allocated to the upgrade process, and necessary changes to workflow are made by an expert that performs the upgrade. In the case of controlled language feature deprecation (as opposed to the std.json example), it should ideally be safe to assume an alternative recommendation is in place, and it was designed to minimise disruption. In the case we are discussing here, the disruption is small and easily addressed. Languages are adopted by enterprises only when there is long term stability
 to it. C code written 30 years back in K&R style still compiles without any
 problem. Please enhance the language but don't break existing code.
In my experience, C/C++ is wildly unstable. I've been responsible for managing C compiler updates on many occasions, and they often cause complete catastrophe, with no warning or deprecation path given! Microsoft are notorious for this. Basically every version of MSC is incompatible with the version prior in some annoying way. I personally feel D has a major advantage here since all 3 compilers share the same front-end, and has a proper 'deprecated' concept (only recently introduced to C), and better compile time detection and warning opportunities. Frankly, I think all this complaining about breaking changes in D is massively overrated. C is much, much worse! The only difference is that D releases are more frequent than C releases. That will change as the language matures. Also if something has to be deprecated, it should exist in that deprecated
 state for at least for 5 years. Currently it is one year and for enterprise
 customers that is a very short period.
This is possibly true. It's a tricky balancing act. I'd rather see D take a more strict approach here, so that we don't end up in the position where 30-year-old D code still exists alongside 'modern' D code written completely differently, requiring to be compiled with a bunch of different options. The old codebases should be nudged to update along the way. I would consider it a big mistake to retain the ancient-C backwards compatibility.
Mar 12 2014
parent "Sarath Kodali" <sarath dummy.com> writes:
On Thursday, 13 March 2014 at 05:13:00 UTC, Manu wrote:
 That's not the way business works, at least, not in my neck of 
 the woods.
 Having been responsible for rolling out many 
 compiler/toolset/library
 upgrades personally, that's simply not how it's done.
That may be how the gamedev industry works, because you are always on cutting-edge technology. But big companies in traditional industries (like IT or Oil & Gas or Finance or Aviation) always work on a stable tool set. Even undocumented features cannot change in an ad-hoc manner. I have worked for some of the top IT companies and I did not use D for my projects primarily because it was not stable.
 It is assumed that infrastructural update may cause disruption. 
 No sane
 business just goes and rolls out updates without an initial 
 testing and
 adaptation period.
 Time is allocated to the upgrade process, and necessary changes 
 to workflow
 are made by an expert that performs the upgrade.
That happens for a D1 to D2 migration, not from D2.64 to D2.65. You cannot expect the team to test and validate the compiler tool set every few months.
 In the case of controlled language feature deprecation (as 
 opposed to the
 std.json example), it should ideally be safe to assume an 
 alternative
 recommendation is in place, and it was designed to minimise 
 disruption.
 In the case we are discussing here, the disruption is small and 
 easily
 addressed.

 Languages are adopted by enterprises only when there is long 
 term stability
 to it. C code written 30 years back in K&R style still 
 compiles without any
 problem. Please enhance the language but don't break existing 
 code.
In my experience, C/C++ is wildly unstable. I've been responsible for managing C compiler updates on many occasions, and they often cause complete catastrophe, with no warning or deprecation path given! Microsoft are notorious for this. Basically every version of MSC is incompatible with the version prior in some annoying way. I personally feel D has a major advantage here since all 3 compilers share the same front-end, and has a proper 'deprecated' concept (only recently introduced to C), and better compile time detection and warning opportunities. Frankly, I think all this complaining about breaking changes in D is massively overrated. C is much, much worse! The only difference is that D releases are more frequent than C releases. That will change as the language matures. Also if something has to be deprecated, it should exist in that deprecated
 state for at least for 5 years. Currently it is one year and 
 for enterprise
 customers that is a very short period.
This is possibly true. It's a tricky balancing act. I'd rather see D take a more strict approach here, so that we don't end up in the position where 30-year-old D code still exists alongside 'modern' D code written completely differently, requiring to be compiled with a bunch of different options. The old codebases should be nudged to update along the way. I would consider it a big mistake to retain the ancient-C backwards compatibility.
Even I do not want D to have 30 years of backward compatibility. But that is something the C community has got used to. That is why I said we need at least a 5-year deprecation cycle. Anyway, coming to your original issue - why can't you add "final:" to your class as suggested by Walter? Doesn't this solve your problem without changing the default behaviour? - Sarath
Mar 12 2014
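For reference, Walter's pattern is valid D today. Since a `final:` label can't be switched back off (that is the gap the proposed `!final` / `virtual` syntax would fill), a `final { ... }` attribute block is one way to keep a single virtual function. A sketch with made-up class and method names:

```d
class C
{
    // Everything inside this attribute block is non-virtual (final).
    final
    {
        int fastPath() { return 1; }
        int helper()   { return 2; }
    }

    // Declared outside the final block, so it remains virtual.
    int overridable() { return 3; }
}

class Derived : C
{
    override int overridable() { return 4; }
}
```

The block form scopes the attribute, whereas the `final:` label applies to everything that follows it in the class body.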
prev sibling parent "Jacob Carlborg" <doob me.com> writes:
On Thursday, 13 March 2014 at 00:18:06 UTC, Andrei Alexandrescu 
wrote:

 The saying goes, "you can't make a bucket of yogurt without a 
 spoonful of rennet". The pattern of resetting customer code 
 into the next version must end. It's the one thing that both 
 current and future users want: a pattern of stability and 
 reliability.
One thing we can do is to have a better version scheme. Currently (breaking) language changes, (breaking) library changes and bug fixes all occur in the same release. There's no way to tell from the version number whether a specific release was only a bug-fix release (if we were to do that). -- /Jacob Carlborg
Mar 13 2014
prev sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 13 March 2014 at 00:02:13 UTC, Chris Williams wrote:
 On Wednesday, 12 March 2014 at 22:50:00 UTC, Walter Bright 
 wrote:
 But we nearly lost a major client over it.

 We're past the point where we can break everyone's code.
As someone who would like to be able to use D as a language, professionally, it's more important to me that D gain future clients than that it maintains the ones that it has. Even more important is that it does both of those things.
You don't gain clients by losing clients.
Mar 12 2014
prev sibling next sibling parent "Mike" <none none.com> writes:
On Wednesday, 12 March 2014 at 22:50:00 UTC, Walter Bright wrote:

 We're past the point where we can break everyone's code.
I suspect changes to the default behaviour of the language can be made if they are managed well. It just needs to be done gradually. Not making improvements to the language sacrifices future clients at the expense of existing clients.
Release x: Add a -final_by_default switch. Those who want 'final by default' can start building their code with that feature. Default is still 'not final by default'. Warn with this release that the default behaviour will change in y+z releases.
Release x+y: Add a -no_final_by_default flag and encourage users who don't want 'final by default' to use that flag in preparation for the next release. Default is still 'not final by default'. Repeat the warning that the default will change with the next release.
Release x+y+z: Make the switch to *final by default*. Maintain the -no_final_by_default flag as long as you deem necessary.
In addition: Make binary and source code for previous versions *easily* accessible. Procrastinators can always revert to a known state. My 2 cents.
Mar 12 2014
prev sibling next sibling parent reply "Kapps" <opantm2+spam gmail.com> writes:
On Wednesday, 12 March 2014 at 22:50:00 UTC, Walter Bright wrote:
 The argument for final by default, as eloquently expressed by 
 Manu, is a good one. Even Andrei agrees with it (!).

 The trouble, however, was illuminated most recently by the 
 std.json regression that broke existing code. The breakage 
 wasn't even intentional; it was a mistake. The user fix was 
 also simple, just a tweak here and there to user code, and the 
 compiler pointed out where each change needed to be made.

 But we nearly lost a major client over it.
Was this because of a breaking change itself or because of the lack of warning and nature of the change? The final by default change should not be something that catches anyone by surprise. There would be lots of time to prepare for it, warnings would be made, and an entire deprecation process gone through. It would not be a random compiler change that breaks code by surprise for no end-user benefit. When warnings start occurring, the compiler can quite clearly tell you *exactly* what you need to change to make your code up-to-date. And in the end, there is a net benefit to the user in the form of less error-prone, faster, code. I used to get frustrated when my code would randomly break every compiler update (and it shows how much D has progressed that regressions in my own code are now a rare occurrence), but unexpected regressions such as the std.json regression are much different from intended changes with plenty of time and warning that provide an overall (even if slight in many cases) benefit to the end-user.
Mar 12 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/12/2014 6:30 PM, Kapps wrote:
 I used to get frustrated when my code would randomly break every compiler
update
 (and it shows how much D has progressed that regressions in my own code are now
 a rare occurrence), but unexpected regressions such as the std.json regression
 are much different from intended changes with plenty of time and warning that
 provide an overall (even if slight in many cases) benefit to the end-user.
I got caught by breaking changes myself. I even approved the changes. But they unexpectedly broke projects of mine, and I had to go through updating & fixing them, supplying updates, etc. It sux. And it's much, much, much worse if you've got lots of legacy code with only a vague idea of how it works because the engineers who wrote it have moved on, etc.
Mar 12 2014
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Mar 12, 2014 at 08:01:39PM -0700, Walter Bright wrote:
 On 3/12/2014 6:30 PM, Kapps wrote:
I used to get frustrated when my code would randomly break every
compiler update (and it shows how much D has progressed that
regressions in my own code are now a rare occurrence), but unexpected
regressions such as the std.json regression are much different from
intended changes with plenty of time and warning that provide an
overall (even if slight in many cases) benefit to the end-user.
I got caught by breaking changes myself. I even approved the changes. But they unexpectedly broke projects of mine, and I had to go through updating & fixing them, supplying updates, etc. It sux. And it's much, much, much worse if you've got lots of legacy code with only a vague idea of how it works because the engineers who wrote it have moved on, etc.
Or you wrote that code but it has been so long ago that you don't remember the fine details of it to be able to judge what is the correct way to fix it. This doubly sux when the code is for a workhorse program that you're actually *using* on a daily basis, which has been working just fine for the last 2 years, and now it suddenly doesn't compile / doesn't work anymore, and you need it to get something done and don't have time to sit down and figure out why it broke (or how to fix it). T -- I am a consultant. My job is to make your job redundant. -- Mr Tom
Mar 12 2014
parent reply "Joseph Cassman" <jc7919 outlook.com> writes:
On Thursday, 13 March 2014 at 04:57:49 UTC, H. S. Teoh wrote:
 On Wed, Mar 12, 2014 at 08:01:39PM -0700, Walter Bright wrote:
 On 3/12/2014 6:30 PM, Kapps wrote:
I used to get frustrated when my code would randomly break 
every
compiler update (and it shows how much D has progressed that
regressions in my own code are now a rare occurrence), but 
unexpected
regressions such as the std.json regression are much 
different from
intended changes with plenty of time and warning that provide 
an
overall (even if slight in many cases) benefit to the 
end-user.
I got caught by breaking changes myself. I even approved the changes. But they unexpectedly broke projects of mine, and I had to go through updating & fixing them, supplying updates, etc. It sux. And it's much, much, much worse if you've got lots of legacy code with only a vague idea of how it works because the engineers who wrote it have moved on, etc.
Or you wrote that code but it has been so long ago that you don't remember the fine details of it to be able to judge what is the correct way to fix it. This doubly sux when the code is for a workhorse program that you're actually *using* on a daily basis, which has been working just fine for the last 2 years, and now it suddenly doesn't compile / doesn't work anymore, and you need it to get something done and don't have time to sit down and figure out why it broke (or how to fix it). T
Hear, hear! Or even the tooling and environment needed to get it to work are a thing of the past. I'm starting to remember some long hours working with old versions of MS Access on old Windows installations, trying to get them working on newer versions. Arg! Joseph
Mar 12 2014
parent reply Manu <turkeyman gmail.com> writes:
On 13 March 2014 15:14, Joseph Cassman <jc7919 outlook.com> wrote:

 On Thursday, 13 March 2014 at 04:57:49 UTC, H. S. Teoh wrote:

 On Wed, Mar 12, 2014 at 08:01:39PM -0700, Walter Bright wrote:

 On 3/12/2014 6:30 PM, Kapps wrote:
I used to get frustrated when my code would randomly break every
compiler update (and it shows how much D has progressed that
regressions in my own code are now a rare occurrence), but unexpected
regressions such as the std.json regression are much different from
intended changes with plenty of time and warning that provide an
overall (even if slight in many cases) benefit to the end-user.
I got caught by breaking changes myself. I even approved the changes. But they unexpectedly broke projects of mine, and I had to go through updating & fixing them, supplying updates, etc. It sux. And it's much, much, much worse if you've got lots of legacy code with only a vague idea of how it works because the engineers who wrote it have moved on, etc.
Or you wrote that code but it has been so long ago that you don't remember the fine details of it to be able to judge what is the correct way to fix it. This doubly sux when the code is for a workhorse program that you're actually *using* on a daily basis, which has been working just fine for the last 2 years, and now it suddenly doesn't compile / doesn't work anymore, and you need it to get something done and don't have time to sit down and figure out why it broke (or how to fix it). T
Here here! Or even the tooling and environment needed to get it to work are a thing of the past. Starting to remember some long hours working with old versions of MS Access on old Windows installations and trying to get them working on newer versions. Arg! Joseph
Again, this is conflating random breakage with controlled deprecation. A clear message with a file:line that says "virtual-by-default is deprecated, add 'virtual' _right here_." is not comparable to the behaviour of byLine() silently changing from release to release and creating some bugs, or std.json breaking unexpectedly with no warning.
Mar 12 2014
next sibling parent "Sarath Kodali" <sarath dummy.com> writes:
On Thursday, 13 March 2014 at 05:31:17 UTC, Manu wrote:
 Again, this is conflating random breakage with controlled 
 deprecation.
 A clear message with a file:line that says "virtual-by-default 
 is
 deprecated, add 'virtual' _right here_." is not comparable to 
 the behaviour
 of byLine() silently changing from release to release and 
 creating some
 bugs, or std.json breaking unexpectedly with no warning.
Hundreds or thousands of "deprecated" warning messages for every D project vs adding "final:" to a few classes in a few projects. Which is better? - Sarath
Mar 12 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/12/2014 10:30 PM, Manu wrote:
 Again, this is conflating random breakage with controlled deprecation.
 A clear message with a file:line that says "virtual-by-default is deprecated,
 add 'virtual' _right here_." is not comparable to the behaviour of byLine()
 silently changing from release to release and creating some bugs, or std.json
 breaking unexpectedly with no warning.
In the case I experienced, I got a clear compiler message. I knew what to fix. It still sucked. It was "fixing" non-buggy code that was intentionally written that way. I don't have as much of a problem breaking things that were already broken and happened to work out of luck, but that isn't the case here.
Mar 12 2014
parent reply Xavier Bigand <flamaros.xavier gmail.com> writes:
Le 13/03/2014 07:47, Walter Bright a écrit :
 On 3/12/2014 10:30 PM, Manu wrote:
 Again, this is conflating random breakage with controlled deprecation.
 A clear message with a file:line that says "virtual-by-default is
 deprecated,
 add 'virtual' _right here_." is not comparable to the behaviour of
 byLine()
 silently changing from release to release and creating some bugs, or
 std.json
 breaking unexpectedly with no warning.
In the case I experienced, I got a clear compiler message. I knew what to fix. It still sucked. It was "fixing" non-buggy code that was intentionally written that way. I don't have as much of a problem breaking things that were already broken and happened to work out of luck, but that isn't the case here.
Maybe a tool can help if the changes are really simple? Tools are needed so that such refactoring can be done automatically. For some old games I wrote a script converter because we completely broke the syntax (from XML to Lua) with a new version of our engine; we did that because new engine features were requested for the next game updates. It can look like a waste of time, but it was fun to do.
Mar 13 2014
next sibling parent "Kapps" <opantm2+spam gmail.com> writes:
On Thursday, 13 March 2014 at 17:08:05 UTC, Xavier Bigand wrote:
 Le 13/03/2014 07:47, Walter Bright a écrit :

 Maybe a tool can help if the changes are really simple? Tools are 
 needed so that this kind of refactoring can be done automatically.
 
 For some old games I wrote a script converter because we 
 completely broke the syntax (from XML to Lua) with a version of 
 our engine; we did that because new engine features were 
 requested for the next game updates.
 It can look like a waste of time, but it was fun to do.
It's difficult to have such a tool in D that can safely handle all situations (mixins in particular make it very difficult). Though something could probably be made that handles the vast majority of situations, after the user confirms that the changes are acceptable.
Mar 13 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/13/2014 10:07 AM, Xavier Bigand wrote:
 Maybe a tool can help if the changes are really simple? Tools are needed so
 that this kind of refactoring can be done automatically.
Source code "fixers" give me the heebie-jeebies. Besides, this particular fix needed some judgement.
Mar 13 2014
parent Manu <turkeyman gmail.com> writes:
On 14 March 2014 07:55, Walter Bright <newshound2 digitalmars.com> wrote:

 On 3/13/2014 10:07 AM, Xavier Bigand wrote:

 Maybe a tool can help if the changes are really simple? Tools are needed so
 that this kind of refactoring can be done automatically.
Source code "fixers" give me the heebie-jeebies. Besides, this particular fix needed some judgement.
I think it depends how it's done. Once upon a time, a tool that just made changes to a bunch of files would, I agree, be horrible. These days you have version control, and it would also be easy for the tool to pop up a merge window to visualise the changes being made.

Theoretically, such a tool could produce a patch rather than modify your code directly; then you can merge it with your tool of choice and visualise what it did in the process. In fact, that's how to add this often-requested feature to DMD: run the compiler with a special flag that does "upgrade v2.0xx -> 2.0yy", and have it produce a patch file rather than modify your source. That's gotta be fairly unobjectionable.

I'd say that's an awesome feature, whereas, like you, I would strongly oppose any feature that directly updates your code for you.
Mar 13 2014
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Wednesday, 12 March 2014 at 22:50:00 UTC, Walter Bright wrote:
 The argument for final by default, as eloquently expressed by 
 Manu, is a good one. Even Andrei agrees with it (!).

 The trouble, however, was illuminated most recently by the 
 std.json regression that broke existing code. The breakage 
 wasn't even intentional; it was a mistake. The user fix was 
 also simple, just a tweak here and there to user code, and the 
 compiler pointed out where each change needed to be made.

 But we nearly lost a major client over it.

 We're past the point where we can break everyone's code. It's 
 going to cost us far, far more than we'll gain. (And you all 
 know that if we could do massive do-overs, I'd get rid of put's 
 auto-decode.)

 Instead, one can write:

    class C { final: ... }

 as a pattern, and everything in the class will be final. That 
 leaves the "but what if I want a single virtual function?" 
 There needs to be a way to locally turn off 'final'. Adding 
 'virtual' is one way to do that, but:

 1. there are other attributes we might wish to turn off, like 
 'pure' and 'nothrow'.

 2. it seems excessive to dedicate a keyword just for that.

 So, there's the solution that has been proposed before:

    !final
    !pure
    !nothrow
    etc.
Yes please.
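For reference, the first half of the proposal already works as written. Here is a minimal sketch, assuming present-day D, where neither the proposed !final syntax nor a virtual keyword exists, so the only way to keep one method virtual is to declare it before the final: label:

```d
// Sketch of the "final:" pattern from Walter's post. The class and method
// names are invented for illustration.
class C
{
    // Still virtual: declared before final: takes effect.
    void onEvent() { }

final:
    // Everything from here down is non-virtual and non-overridable.
    int id() { return 42; }
    bool ready() { return true; }
}

class Derived : C
{
    override void onEvent() { }   // fine: onEvent is virtual
    // override int id() { ... }  // would be a compile error: id is final
}

void main()
{
    C c = new Derived;
    c.onEvent();
    assert(c.id() == 42);
}
```

This also shows the limitation the thread is about: once final: is in effect there is no spelling to turn it back off for a later method.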
Mar 12 2014
prev sibling next sibling parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Wednesday, 12 March 2014 at 22:50:00 UTC, Walter Bright wrote:
 The argument for final by default, as eloquently expressed by 
 Manu, is a good one. Even Andrei agrees with it (!).

 The trouble, however, was illuminated most recently by the 
 std.json regression that broke existing code. The breakage 
 wasn't even intentional; it was a mistake. The user fix was 
 also simple, just a tweak here and there to user code, and the 
 compiler pointed out where each change needed to be made.

 But we nearly lost a major client over it.
I find this a bit baffling. Given the investment this customer must have in D, I can't imagine them switching to a new language over something like this. I hate to say it, but this sounds like the instances you hear of when people call up customer service just to have someone to yell at. Not that the code breakage is okay, but I do feel like this may be somewhat of an exaggeration.

Regarding this virtual by default issue: I entirely support Manu's argument and wholeheartedly agree with it. I even think that I'd be more likely to use D professionally if D worked this way, for many of the same reasons Manu has expressed. There may even be a window for doing this, but the communication around the change would have to be perfect.

Regarding user retention: I've spent the past N months beginning the process of selling D at work. The language and library are at a point of maturity where I think it might have a chance when evaluated simply on the merits of the language itself. However, what has me really hesitant to put my shoulder behind D and really push isn't that changes occur sometimes. Even big changes. It's how they're handled. Issues come up in the newsgroup and are discussed back and forth for ages. Seriously considered. And then maybe a decision is apparently reached (as with this virtual by default thing), and so I expect that action will be taken. And then nothing happens. And other times big changes occur with seemingly little warning.

Personally, I don't really require perfect compatibility between releases, but I do want to see things moving decisively in a clearly communicated direction. I want to know where we're going and how we're going to get there, and if that means that I have to hold off moving to a new compiler release for a while while I sort out changes, that's fine. But I want to be able to prepare for it.
As things stand, I'm worried that if I got a team to move to D, we'd have stuff breaking unexpectedly and I'd end up feeling like an ass for recommending it. I guess that's probably what prompted the "almost lost a major client" issue you mentioned above. This JSON parser change was more the proverbial straw than a major issue in itself.

As for the !virtual idea... I hate it. Please don't add yet more ways for people to make their code confusing.
Mar 12 2014
next sibling parent "Kapps" <opantm2+spam gmail.com> writes:
On Thursday, 13 March 2014 at 05:15:58 UTC, Sean Kelly wrote:
 Regarding user retention... I've spent the past N months 
 beginning the process of selling D at work.  The language and 
 library are at a point of maturity where I think it might have 
 a chance when evaluated simply on the merits of the language 
 itself.  However, what has me really hesitant to put my 
 shoulder behind D and really push isn't that changes occur 
 sometimes.  Even big changes.  It's how they're handled.  
 Issues come up in the newsgroup and are discussed back and 
 forth for ages.  Seriously considered.  And then maybe a 
 decision is apparently reached (as with this virtual by default 
 thing) and so I expect that action will be taken.  And then 
 nothing happens.  And other times big changes occur with 
 seemingly little warning.  Personally, I don't really require 
 perfect compatibility between releases, but I do want to see 
 things moving decisively in a clearly communicated direction.  
 I want to know where we're going and how we're going to get 
 there, and if that means that I have to hold on moving to a new 
 compiler release for a while while I sort out changes that's 
 fine.  But I want to be able to prepare for it.
I agree with this, in particular issues where a conclusion is reached and then ignored for months or years. This is something that the preapproved Bugzilla tag should be able to help with, but I'm not sure how much it's being used in situations like this.

In this example, Walter mentioned that he was essentially convinced to make the change, then the thread faded off and the issue was ignored. This was over 9 months ago. There was no mention of whether this is something that would be done (or accepted if the community implemented the change); it just reached a conclusion that made it seem like the change was likely, and then it got set aside like many similar issues.

In this case, eventually a pull request was made to begin the change process, which then sat there for over three months until it was merged, at which point Andrei mentioned his disapproval / rejection. Had this pull not gotten merged (whether it goes into an actual release or not), it would be just another topic that's discussed, seems like it's accepted, then is completely ignored.
Mar 13 2014
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 3/13/14, Sean Kelly <sean invisibleduck.org> wrote:
 As for the !virtual idea... I hate it.  Please don't add yet more
 ways for people to make their code confusing.
Also this needlessly invents new syntax for a language that's supposed to be familiar to C/C++ programmers. The "virtual" keyword has been known for decades; "!final" is the special case which has to be taught. It also looks ugly and could easily be misread.
Mar 13 2014
prev sibling parent reply "Don" <x nospam.com> writes:
On Thursday, 13 March 2014 at 05:15:58 UTC, Sean Kelly wrote:
 On Wednesday, 12 March 2014 at 22:50:00 UTC, Walter Bright 
 wrote:
 The argument for final by default, as eloquently expressed by 
 Manu, is a good one. Even Andrei agrees with it (!).

 The trouble, however, was illuminated most recently by the 
 std.json regression that broke existing code. The breakage 
 wasn't even intentional; it was a mistake. The user fix was 
 also simple, just a tweak here and there to user code, and the 
 compiler pointed out where each change needed to be made.

 But we nearly lost a major client over it.
I find this a bit baffling. Given the investment this customer must have in D, I can't imagine them switching to a new language over something like this. I hate to say it, but this sounds like the instances you hear of when people call up customer service just to have someone to yell at. Not that the code breakage is okay, but I do feel like this may be somewhat of an exaggeration.
And std.json is among the worst code I've ever seen. I'm a bit shocked that anyone would be using it in production code.
 Regarding this virtual by default issue.  I entirely support 
 Manu's argument and wholeheartedly agree with it.  I even think 
 that I'd be more likely to use D professionally if D worked 
 this way, for many of the same reasons Manu has expressed.  
 There may even be a window for doing this, but the 
 communication around the change would have to be perfect.

 Regarding user retention... I've spent the past N months 
 beginning the process of selling D at work.  The language and 
 library are at a point of maturity where I think it might have 
 a chance when evaluated simply on the merits of the language 
 itself.  However, what has me really hesitant to put my 
 shoulder behind D and really push isn't that changes occur 
 sometimes.  Even big changes.  It's how they're handled.  
 Issues come up in the newsgroup and are discussed back and 
 forth for ages.  Seriously considered.  And then maybe a 
 decision is apparently reached (as with this virtual by default 
 thing) and so I expect that action will be taken.  And then 
 nothing happens.  And other times big changes occur with 
 seemingly little warning.  Personally, I don't really require 
 perfect compatibility between releases, but I do want to see 
 things moving decisively in a clearly communicated direction.  
 I want to know where we're going and how we're going to get 
 there, and if that means that I have to hold on moving to a new 
 compiler release for a while while I sort out changes that's 
 fine.  But I want to be able to prepare for it.  As things 
 stand, I'm worried that if I got a team to move to D we'd have 
 stuff breaking unexpectedly and I'd end up feeling like an ass 
 for recommending it.  I guess that's probably what prompted the 
 "almost lost a major client" issue you mentioned above.  This 
 JSON parser change was more the proverbial straw than a major 
 issue in itself.
I agree completely.

Some things that really should be fixed don't get fixed because of a paranoid fear of breaking code. And this tends to happen with the issues that can give nice warning messages and are easy to fix...

Yet there are still enough bugs that your code breaks every release anyway. We need to lose the fantasy that there is legacy code which still compiles. Anything more than a year or so old is broken already.
 As for the !virtual idea... I hate it.  Please don't add yet 
 more ways for people to make their code confusing.
Mar 13 2014
next sibling parent Daniel =?ISO-8859-1?Q?Koz=E1k?= <kozzi11 gmail.com> writes:
Don píše v Čt 13. 03. 2014 v 09:49 +0000:
 Yet there are still enough bugs that your code breaks every 
 release anyway.
 We need to lose the fantasy that there is legacy code which still 
 compiles.
+1000
 Anything more than a year or so old is broken already.
Even recent code does not compile with the latest dmd. With almost every dmd release I must rewrite plenty of my D code base.
 As for the !virtual idea... I hate it.  Please don't add yet 
 more ways for people to make their code confusing.
+1
Mar 13 2014
prev sibling next sibling parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Don"  wrote in message news:ekymfpqyxasvelcixrjp forum.dlang.org...

 I agree completely.

 Some things that really should be fixed, don't get fixed because of a 
 paranoid fear of breaking code. And this tends to happen with the issues 
 that can give nice warning messages and are easy to fix...

 Yet there are still enough bugs that your code breaks every release 
 anyway.
 We need to lose the fantasy that there is legacy code which still 
 compiles.
 Anything more than a year or so old is broken already.
As usual I agree with every single thing in this post, and Sean's. Regressions are bad but have nothing to do with using slow, controlled deprecation to make the language better.
Mar 13 2014
next sibling parent reply "Brian Rogoff" <brogoff gmail.com> writes:
On Thursday, 13 March 2014 at 12:01:15 UTC, Daniel Murphy wrote:
 "Don"  wrote in message 
 news:ekymfpqyxasvelcixrjp forum.dlang.org...

 I agree completely.

 Some things that really should be fixed, don't get fixed 
 because of a paranoid fear of breaking code. And this tends to 
 happen with the issues that can give nice warning messages and 
 are easy to fix...
As usual I agree with every single thing in this post, and Sean's. Regressions are bad but have nothing to do with using slow, controlled deprecation to make the language better.
It might be worthwhile to consider a compiler switch which would require forced virtual/final annotations on all methods, as per

https://d.puremagic.com/issues/show_bug.cgi?id=11616#c4

While code compiled with such a switch would be a bit more verbose, it would ensure that programmers were careful with virtual. It was a good transition plan, and the idea has value even if final by default is delayed or abandoned.

I'd like to see final by default in D rather than in its successor. It seems quite a bit easier to fix than some other unfortunate default choices.
Mar 13 2014
parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Brian Rogoff"  wrote in message 
news:hvdmgktbuhmemwqmbeqn forum.dlang.org...

 It might be worthwhile to consider a compiler switch which would require 
 forced virtual/final annotations on all methods as per

 https://d.puremagic.com/issues/show_bug.cgi?id=11616#c4

 While code compiled with such a switch would be a bit more verbose, it 
 would ensure that programmers were careful with virtual. It was a good 
 transition plan, and the idea has value even if the final by default is 
 delayed or abandoned.
A compiler switch is not the place for this, but it would make a fine lint rule.
Mar 13 2014
parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Thursday, 13 March 2014 at 15:31:25 UTC, Daniel Murphy wrote:
 A compiler switch is not the place for this, but it would make 
 a fine lint rule.
It can also be done in D today using UDAs and compile time reflection, including project-wide if you want to use RTInfo.
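Adam's suggestion can be sketched in present-day D. This is only one possible shape of such a lint; the @virt marker struct and the enforceExplicitVirtual helper are hypothetical names, not an existing library:

```d
import std.traits : hasUDA;

struct virt {} // hypothetical marker UDA for intentionally virtual methods

// Compile-time lint: fail the build if any virtual method of T lacks @virt.
void enforceExplicitVirtual(T)()
{
    static foreach (name; __traits(derivedMembers, T))
    {
        static foreach (f; __traits(getOverloads, T, name))
        {
            static assert(!__traits(isVirtualMethod, f) || hasUDA!(f, virt),
                T.stringof ~ "." ~ name ~ " is virtual but not marked @virt");
        }
    }
}

class Good
{
    @virt void update() { }       // deliberately virtual, opted in
    final int id() { return 1; }  // non-virtual, nothing to annotate
}

void main()
{
    enforceExplicitVirtual!Good();  // compiles; an unmarked virtual method would not
    assert((new Good).id() == 1);
}
```

Run project-wide (e.g. via RTInfo, as Adam notes), this gives final-by-default discipline without any language change.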
Mar 13 2014
prev sibling parent Jerry <jlquinn optonline.net> writes:
"Daniel Murphy" <yebbliesnospam gmail.com> writes:

 "Don"  wrote in message news:ekymfpqyxasvelcixrjp forum.dlang.org...

 I agree completely.

 Some things that really should be fixed, don't get fixed because of a
 paranoid fear of breaking code. And this tends to happen with the issues
 that can give nice warning messages and are easy to fix...

 Yet there are still enough bugs that your code breaks every release anyway.
 We need to lose the fantasy that there is legacy code which still compiles.
 Anything more than a year or so old is broken already.
As usual I agree with every single thing in this post, and Sean's. Regressions are bad but have nothing to do with using slow, controlled deprecation to make the language better.
I add my +1 to this side of the argument.

Performance matters. D is of interest to me as a language that is clean but performs well out of the box. The harder it is to make it perform well, the more we might as well continue to use C++. I've fixed multiple performance problems involving virtual functions being used where they don't belong. I want to see final by default for performance.

I also want to see final by default because I work in an area where many APIs are created by people who are trying things out, rather than designing software for the long term. Invariably, some of these APIs will linger and be very hard to fix down the road. I'd *much* rather these maintenance headaches not be virtual.

I also second the point that a controlled deprecation cycle is vastly different from accidental breakage. It can be planned for as long as the communication happens. It might be very useful to have a roadmap page that lists every deprecation cycle that is planned or ongoing.

Jerry
Mar 13 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/13/14, 2:49 AM, Don wrote:
 On Thursday, 13 March 2014 at 05:15:58 UTC, Sean Kelly wrote:
 I find this a bit baffling.  Given the investment this customer must
 have in D, I can't imagine them switching to a new language over
 something like this.  I hate to say it, but this sounds like the
 instances you hear of when people call up customer service just to
 have someone to yell at.  Not that the code breakage is okay, but I do
 feel like this may be somewhat of an exaggeration.
And std.json is among the worst code I've ever seen. I'm a bit shocked that anyone would be using it in production code.
We're using it.
 Regarding user retention... I've spent the past N months beginning the
 process of selling D at work.  The language and library are at a point
 of maturity where I think it might have a chance when evaluated simply
 on the merits of the language itself.  However, what has me really
 hesitant to put my shoulder behind D and really push isn't that
 changes occur sometimes.  Even big changes.  It's how they're handled.
 Issues come up in the newsgroup and are discussed back and forth for
 ages.  Seriously considered.  And then maybe a decision is apparently
 reached (as with this virtual by default thing) and so I expect that
 action will be taken.  And then nothing happens.  And other times big
 changes occur with seemingly little warning.  Personally, I don't
 really require perfect compatibility between releases, but I do want
 to see things moving decisively in a clearly communicated direction. I
 want to know where we're going and how we're going to get there, and
 if that means that I have to hold on moving to a new compiler release
 for a while while I sort out changes that's fine.  But I want to be
 able to prepare for it.  As things stand, I'm worried that if I got a
 team to move to D we'd have stuff breaking unexpectedly and I'd end up
 feeling like an ass for recommending it.  I guess that's probably what
 prompted the "almost lost a major client" issue you mentioned above.
 This JSON parser change was more the proverbial straw than a major
 issue in itself.
I agree completely. Some things that really should be fixed, don't get fixed because of a paranoid fear of breaking code. And this tends to happen with the issues that can give nice warning messages and are easy to fix... Yet there are still enough bugs that your code breaks every release anyway. We need to lose the fantasy that there is legacy code which still compiles. Anything more than a year or so old is broken already.
Backward compatibility is more like a spectrum than a threshold. Not having it now is not an argument to cease pursuing it. Andrei
Mar 13 2014
parent "Don" <x nospam.com> writes:
 Some things that really should be fixed, don't get fixed 
 because of a
 paranoid fear of breaking code. And this tends to happen with 
 the issues
 that can give nice warning messages and are easy to fix...

 Yet there are still enough bugs that your code breaks every 
 release anyway.
 We need to lose the fantasy that there is legacy code which 
 still compiles.
 Anything more than a year or so old is broken already.
Backward compatibility is more like a spectrum than a threshold. Not having it now is not an argument to cease pursuing it.
Exactly, it's a spectrum. But at any given time, the whole language and library are not at a single point on the spectrum. Some things are frozen, others in practice are not. And the problem is that D has historically acted as if everything was the same, which just doesn't work.

I think three levels of forwards compatibility are useful to consider:

1. Frozen. Things that you can absolutely rely on; we will NEVER change it under any circumstances. Any bugs in the design will never be fixed.

2. Stable. We will attempt to minimize changes, but we don't guarantee your code will never break. (It may break in order to prevent breakage of things in case 1, for example.) We can guarantee a deprecation path in most cases.

3. We will avoid gratuitous changes, but it will almost certainly change in the future.

And what we want to do is gradually move as many things as we can from category (2) into category (1), and from (3) into (2). I'd like to see us giving a lot more guarantees, rather than trying to keep promises we never actually made.
Mar 13 2014
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
I'd like to address, in general, the issue of, what I term, "performance by 
default" which is part of the argument for final by default.

C, C++, and D are billed as languages for writing high performance apps. And 
this is true. What is not true, or at least has not been true in my experience, 
is that an application is high performance merely because it is written in C, 
C++ or D.

Let me emphasize, code is not fast just because it is written using a high 
performance language.

High performance code does not happen by accident, it has to be intentionally 
written that way. Furthermore, I can pretty much guarantee you that if an 
application has never been profiled, its speed can be doubled by using a 
profiler. And if you really, really want high performance code, you're going to 
have to spend time looking at the assembler dumps of the code and tweaking the 
source code to get things right.

High performance code is not going to emanate from those programmers who are not
skilled in the art, it is not going to happen by accident, it is not going to 
happen by following best practices, it is not going to happen just because 
you're writing in C/C++/D.

D certainly provides what is necessary to write code that blows away 
conventional C or C++ code.

It reminds me of when I worked in a machine shop in college. I'd toil for hours 
cutting parts, and the parts were not round, the holes were off center, heck, 
the surfaces weren't that smooth. There was an older machinist there who'd take 
pity on me. He'd look at what I was doing, cluck cluck, he'd touch the bit with 
a grinder, he'd tweak the feed speed, he'd make arcane adjustments to the 
machine tool, and out would come perfect parts. I am an awe to this day of his 
skill - I still don't know how he did it. The point is, he and I were using the 
same tools. He knew how to make them sing, I didn't.

I still cannot drill a goddam hole and get it where I measured it should be.
Mar 12 2014
next sibling parent reply Manu <turkeyman gmail.com> writes:
On 13 March 2014 15:51, Walter Bright <newshound2 digitalmars.com> wrote:

 I'd like to address, in general, the issue of, what I term, "performance
 by default" which is part of the argument for final by default.

 C, C++, and D are billed as languages for writing high performance apps.
 And this is true. What is not true, or at least has not been true in my
 experience, is that an application is high performance merely because it is
 written in C, C++ or D.

 Let me emphasize, code is not fast just because it is written using a high
 performance language.

 High performance code does not happen by accident, it has to be
 intentionally written that way. Furthermore, I can pretty much guarantee
 you that if an application has never been profiled, its speed can be
 doubled by using a profiler. And if you really, really want high
 performance code, you're going to have to spend time looking at the
 assembler dumps of the code and tweaking the source code to get things
 right.

 High performance code is not going to emanate from those programmers who
 are not skilled in the art, it is not going to happen by accident, it is
 not going to happen by following best practices, it is not going to happen
 just because you're writing in C/C++/D.

 D certainly provides what is necessary to write code that blows away
 conventional C or C++ code.
But you understand the danger in creating a situation where experts can't optimise their code even if they want to; if at some later time it becomes an issue, or some new customer comes along with more stringent requirements. These are not unrealistic hypothetical scenarios. Libraries exist, and they have customers by definition. Requirements change over time. Defaulting to an inflexible position is dangerous.

There's another programming principle; beware of early optimisation. Many people swear by this. These 2 situations are at odds.

 It reminds me of when I worked in a machine shop in college. I'd toil for
 hours cutting parts, and the parts were not round, the holes were off
 center, heck, the surfaces weren't that smooth. There was an older
 machinist there who'd take pity on me. He'd look at what I was doing, cluck
 cluck, he'd touch the bit with a grinder, he'd tweak the feed speed, he'd
 make arcane adjustments to the machine tool, and out would come perfect
 parts. I am an awe to this day of his skill - I still don't know how he did
 it. The point is, he and I were using the same tools. He knew how to make
 them sing, I didn't.

 I still cannot drill a goddam hole and get it where I measured it should
 be.
Mar 12 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/12/2014 11:44 PM, Manu wrote:
 But you understand the danger in creating a situation where experts can't
 optimise their code even if they want to; if at some later time it becomes an
 issue, or some new customer comes along with more stringent requirements.
 These are not unrealistic hypothetical scenarios. Libraries exist, and they
have
 customers by definition. Requirements change over time. Defaulting to an
 inflexible position is dangerous.
As I pointed out in another post, virtuality is hardly the only thing that can change in an interface that strongly affects performance. The existence of destructors is another. Just about everything about an interface affects performance.
Mar 13 2014
parent reply Manu <turkeyman gmail.com> writes:
On 14 March 2014 16:39, Walter Bright <newshound2 digitalmars.com> wrote:

 On 3/12/2014 11:44 PM, Manu wrote:

 But you understand the danger in creating a situation where experts can't
 optimise their code even if they want to; if at some later time it
 becomes an
 issue, or some new customer comes along with more stringent requirements.
 These are not unrealistic hypothetical scenarios. Libraries exist, and
 they have
 customers by definition. Requirements change over time. Defaulting to an
 inflexible position is dangerous.
As I pointed out in another post, virtuality is hardly the only thing that can change in an interface that strongly affects performance. The existence of destructors is another. About everything about an interface affects performance.
In my experience, API layout is the sort of performance detail that library authors are much more likely to carefully consider and get right. It's higher level, easier to understand, and affects all architectures equally. It's also something that they teach in uni. People write books about that sort of thing. Not to say there aren't terrible API designs out there, but D doesn't make terrible-api-design-by-default a feature.

Stuff like virtual is the sort of thing that only gets addressed when it is reported by a user that cares, and library authors are terribly reluctant to implement a breaking change because some user reported it. I know this from experience. I can say with confidence, poor API design has caused me less problems than virtual in my career.

Can you honestly tell me that you truly believe that library authors will consider, as a matter of common sense, the implications of virtual (the silent default state) in their api? Do you truly believe that I'm making a big deal out of nothing; that I will never actually, in practise, encounter trivial accessors and properties that can't inline appearing in my hot loops, or other related issues?

Inline-ability is a very strong API-level performance influence, especially in a language with properties.

Most programmers are not low-level experts; they don't know how to protect themselves from this sort of thing. Honestly, almost everyone will just stick with the default.
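The accessor case being described can be made concrete. A hedged sketch (the Particle class is invented for illustration), where the only difference between an inlinable getter and an indirect vtable call is the final annotation:

```d
// Illustrative only: a trivial getter of the kind discussed above.
// With final, a compiler may inline p.x in the hot loop; without it,
// every access is an indirect call through the vtable.
class Particle
{
    private float x_;
    this(float v) { x_ = v; }
    final @property float x() const { return x_; }  // direct, inlinable
}

float sum(Particle[] ps)
{
    float total = 0;
    foreach (p; ps)
        total += p.x;   // no virtual dispatch on the hot path
    return total;
}

void main()
{
    assert(sum([new Particle(1), new Particle(2)]) == 3);
}
```

Semantically the two versions are identical; the difference only shows up in generated code, which is exactly why library authors rarely notice it.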
Mar 14 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/14/2014 5:06 AM, Manu wrote:
 In my experience, API layout is the sort of performance detail that library
 authors are much more likely to carefully consider and get right. It's higher
 level, easier to understand, and affects all architectures equally.
 It's also something that they teach in uni. People write books about that sort
 of thing.
 Not to say there aren't terrible API designs out there, but D doesn't make
 terrible-api-design-by-default a feature.
 Stuff like virtual is the sort of thing that only gets addressed when it is
 reported by a user that cares, and library authors are terribly reluctant to
 implement a breaking change because some user reported it. I know this from
 experience.
 I can say with confidence, poor API design has caused me less problems than
 virtual in my career.

 Can you honestly tell me that you truly believe that library authors will
 consider, as a matter of common sense, the implications of virtual (the silent
 default state) in their api?
 Do you truly believe that I'm making a big deal out of nothing; that I will
 never actually, in practise, encounter trivial accessors and properties that
 can't inline appearing in my hot loops, or other related issues.

 Inline-ability is a very strong API level performance influence, especially in
a
 language with properties.

 Most programmers are not low-level experts, they don't know how to protect
 themselves from this sort of thing. Honestly, almost everyone will just stick
 with the default.
I find it incongruous to take the position that programmers know all about layout for performance and nothing about function indirection. It leads me to believe that these programmers never once tested their code for performance.

I know what I'm doing, and even I, when I don't test things, always make some innocuous mistake that eviscerates performance. I find it very hard to believe that final-by-default will fix untested code.

And the library APIs still are fixable. Consider:

    class C {
        void foo() { ... }
    }

and foo() needs to be final for performance, but we don't want to break existing users:

    class C {
        void foo() { foo2(); }
        final void foo2() { ... }
    }
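A compilable sketch of the wrapper pattern described here (class and method names are placeholders, not from the thread): existing code keeps overriding and calling foo(), while performance-sensitive new code calls the final foo2() directly.

```d
import std.stdio;

class C
{
    // Keeps its virtual ABI, so existing overrides still work.
    void foo() { foo2(); }

    // Direct, inlinable entry point for new code.
    final void foo2() { writeln("work happens here"); }
}

class Derived : C
{
    override void foo() { writeln("override still compiles"); }
}

void main()
{
    C c = new Derived;
    c.foo();   // dispatches virtually to Derived.foo
    c.foo2();  // direct call, candidate for inlining
}
```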
Mar 14 2014
parent reply Manu <turkeyman gmail.com> writes:
On 15 March 2014 10:49, Walter Bright <newshound2 digitalmars.com> wrote:

 On 3/14/2014 5:06 AM, Manu wrote:

 In my experience, API layout is the sort of performance detail that
 library
 authors are much more likely to carefully consider and get right. It's
 higher
 level, easier to understand, and affects all architectures equally.
 It's also something that they teach in uni. People write books about that
 sort
 of thing.
 Not to say there aren't terrible API designs out there, but D doesn't make
 terrible-api-design-by-default a feature.
 Stuff like virtual is the sort of thing that only gets addressed when it
 is
 reported by a user that cares, and library authors are terribly reluctant
 to
 implement a breaking change because some user reported it. I know this
 from
 experience.
 I can say with confidence, poor API design has caused me less problems
 than
 virtual in my career.

 Can you honestly tell me that you truly believe that library authors will
 consider, as a matter of common sense, the implications of virtual (the
 silent
 default state) in their api?
 Do you truly believe that I'm making a big deal out of nothing; that I
 will
 never actually, in practise, encounter trivial accessors and properties
 that
 can't inline appearing in my hot loops, or other related issues.

 Inline-ability is a very strong API level performance influence,
 especially in a
 language with properties.

 Most programmers are not low-level experts, they don't know how to protect
 themselves from this sort of thing. Honestly, almost everyone will just
 stick
 with the default.
 I find it incongruous to take the position that programmers know all about layout for performance and nothing about function indirection? It leads me to believe that these programmers never once tested their code for performance.
They probably didn't. Library authors often don't if it's not a library specifically intended for aggressive realtime use. Like most programmers, especially PC programmers, their opinion is often "that's the optimiser's job".

That said, function inlining is perhaps the single most important API level performance detail, and especially true in OO code (which advocates accessors/properties). Function calls scattered throughout your function serialise your code; they inhibit the optimiser from pipelining properly in many cases, ie, rescheduling across a function call is often dangerous, and compilers will always take a conservative approach. Locals may need to be saved to the stack across trivial function calls. I'm certain it will make a big difference in many instances.

Compile some release code without -inline and see what the performance difference is; that is probably a fairly realistic measure of the penalty to expect in OO-heavy code.

 I know what I'm doing, and even I, when I don't test things, always make
 some innocuous mistake that eviscerates performance. I find it very hard to
 believe that final-by-default will fix untested code.
I don't find it hard to believe at all, in fact, I find it very likely that there will be a significant benefit to client code that the library author will probably have never given a moment's thought to. It's usually considered fairly reasonable for programmers to trust the optimiser to at least do a decent job. virtual-by-default inhibits many of the most important optimisations; inlining, rescheduling, pipelining, and also increases pressure on the stack and caches.

And that's the whole thing here... I just don't see this as obscure or unlikely at all. If I did, I wouldn't care anywhere near as much as I do. All code has loops somewhere.

 And the library APIs still are fixable. Consider:
     class C {
         void foo() { ... }
     }

 and foo() needs to be final for performance, but we don't want to break
 existing users:

     class C {
         void foo() { foo2(); }
         final void foo2() { ... }
     }
The length you're willing to go to to resist a relatively minor breaking change, with an unusually smooth migration path, that virtually everyone agrees with, is surprising to me. Daniel Murphy revealed that it only affects 13% of classes in DMD's OO heavy code. That is in line with my past predictions; most classes aren't base classes, so most classes aren't actually affected.

I understand that you clearly don't believe in this change, and I grant that is your prerogative, but I really don't get why... I just can't see it when considering the balance. Obviously I care about the compiler's codegen more than the average guy; but as I see it, that's a compiler's primary purpose, and programmers are supposed to be able to trust that it can and will do it well.

My questions above, they were serious questions. Please, I would really like to hear you answer those questions, or rephrase them if you like...

Can you honestly tell me that you truly believe that library authors will consider, as a matter of common sense, the implications of virtual (the silent default state) in their api?

Or you don't consider that to be something worth worrying about, ie, you truly believe that I'm making a big deal out of nothing; that I will never actually, in practise, encounter trivial accessors and properties that can't inline appearing in my hot loops, or other related issues?

I won't post any more on the topic.
Mar 14 2014
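The "trivial accessors and properties that can't inline appearing in hot loops" concern debated above can be made concrete with a short D sketch (hypothetical names, illustration only): the virtual getter is called through the vtable whenever the compiler cannot prove the concrete type, while the final one compiles to a direct, inlinable call.

```d
class Sprite
{
    private int w;

    int width() { return w; }          // virtual by default: vtable call
    final int widthF() { return w; }   // final: direct call, inlinable
}

int sumWidths(Sprite[] sprites)
{
    int total = 0;
    foreach (s; sprites)
        total += s.widthF(); // in a hot loop, this can inline away entirely
    return total;
}
```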
next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 15 March 2014 at 04:03:20 UTC, Manu wrote:
 That said, function inlining is perhaps the single most 
 important API level
 performance detail, and especially true in OO code (which 
 advocates
 accessors/properties).
OOP says: ask, don't tell. Accessors, especially getters, are very anti-OOP. The Haskell of OOP would prevent you from returning anything from a function.
Mar 14 2014
next sibling parent "Araq" <rumpf_a web.de> writes:
On Saturday, 15 March 2014 at 05:29:04 UTC, deadalnix wrote:
 On Saturday, 15 March 2014 at 04:03:20 UTC, Manu wrote:
 That said, function inlining is perhaps the single most 
 important API level
 performance detail, and especially true in OO code (which 
 advocates
 accessors/properties).
OOP say ask, don't tell. Accessors, especially getters, are very anti OOP. The haskell of OOP would prevent you from returning anything from a function.
Yeah, as I said, OO encourages bad design like no other paradigm...
Mar 15 2014
prev sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
On 15.03.2014 06:29, deadalnix wrote:
 On Saturday, 15 March 2014 at 04:03:20 UTC, Manu wrote:
 That said, function inlining is perhaps the single most important API
 level
 performance detail, and especially true in OO code (which advocates
 accessors/properties).
OOP say ask, don't tell. Accessors, especially getters, are very anti OOP. The haskell of OOP would prevent you from returning anything from a function.
What?!? Looking at Smalltalk, SELF, CLOS and Eiffel I fail to see what you mean, given that they are the granddaddies of OOP and all have getters/properties.

--
Paulo
Mar 15 2014
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 15 March 2014 at 08:32:32 UTC, Paulo Pinto wrote:
 On 15.03.2014 06:29, deadalnix wrote:
 On Saturday, 15 March 2014 at 04:03:20 UTC, Manu wrote:
 That said, function inlining is perhaps the single most 
 important API
 level
 performance detail, and especially true in OO code (which 
 advocates
 accessors/properties).
OOP say ask, don't tell. Accessors, especially getters, are very anti OOP. The haskell of OOP would prevent you from returning anything from a function.
What?!? Looking at Smalltalk, SELF, CLOS and Eiffel I fail to see what you mean, given that they are the grand daddies of OOP and all have getters/properties.
And LISP is the granddaddy of functional, and does not have most of the features of modern functional languages.

OOP is about asking the object to do something, not getting infos from an object and acting depending on that. In pseudo code, you'd prefer

    object.DoTheJob()

to

    auto infos = object.getInfos();
    doTheJob(infos);

If you push that principle to the extreme, you must not return anything. Obviously, most principles pushed to the extreme become impractical, but here you go.

Now, what about a processing that would give me back a result? Like the following:

    auto f = new File("file");
    writeln(f.getContent());

Sound cool, right? But you are telling, not asking. You could do:

    interface FileProcessor {
        void processContent(string);
    }

    class WritelnFileProcessor : FileProcessor {
        void processContent(string s) { writeln(s); }
    }

    auto f = new File("file");
    f.process(new WritelnFileProcessor());

This has several advantages that I don't have much time to expose in detail. For instance, the FileProcessor could have more methods, so it can express a richer interface than what could be returned. If some checks need to be done (for security reasons for instance) you can ensure within the File class that they are done properly. It makes things easier to test. You can completely change the way the File class works internally without disturbing any of the code that uses it, etc...

However, it is clear that this comes at a cost. I don't doubt an OO language pushing this to the extreme would see concepts that confuse everybody emerging, pretty much like monads confuse the hell out of everybody in functional languages.
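A self-contained version of the sketch above, assuming a hypothetical Document class as a stand-in for the File class in the post (it is not std.stdio.File): the object pushes its content to the processor instead of handing the content out.

```d
import std.stdio;

interface FileProcessor
{
    void processContent(string s);
}

class WritelnFileProcessor : FileProcessor
{
    void processContent(string s) { writeln(s); }
}

// Hypothetical stand-in for the post's File class.
class Document
{
    private string content;
    this(string content) { this.content = content; }

    // "Tell" style: the object drives the processing, so any
    // security or validity checks can be centralised right here.
    void process(FileProcessor p)
    {
        p.processContent(content);
    }
}

void main()
{
    auto d = new Document("file contents");
    d.process(new WritelnFileProcessor());
}
```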
Mar 15 2014
parent reply "Araq" <rumpf_a web.de> writes:
 However, this is clear that it come at a cost. I don't doubt an 
 OO language pushing this to the extreme would see concept that 
 confuse everybody emerging, pretty much like monads confuse the 
 hell out of everybody in functional languages.
Looks like explicit continuation passing style to me. So "OO done right" means "Human compiler at work"...
Mar 15 2014
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 15 March 2014 at 20:15:16 UTC, Araq wrote:
 However, this is clear that it come at a cost. I don't doubt 
 an OO language pushing this to the extreme would see concept 
 that confuse everybody emerging, pretty much like monads 
 confuse the hell out of everybody in functional languages.
Looks like explicit continuation passing style to me. So "OO done right" means "Human compiler at work"...
Sounds like you enjoy criticizing OOP, but so far you haven't come up with anything interesting. Please bring something to the table or cut the noise.
Mar 15 2014
parent reply "Araq" <rumpf_a web.de> writes:
On Saturday, 15 March 2014 at 22:50:27 UTC, deadalnix wrote:
 On Saturday, 15 March 2014 at 20:15:16 UTC, Araq wrote:
 However, this is clear that it come at a cost. I don't doubt 
 an OO language pushing this to the extreme would see concept 
 that confuse everybody emerging, pretty much like monads 
 confuse the hell out of everybody in functional languages.
Looks like explicit continuation passing style to me. So "OO done right" means "Human compiler at work"...
Sound like you are enjoying criticize OOP, but so far, you didn't come up with anything interesting. Please bring something to the table or cut the noise.
I note that you are not able to counter my argument and so you escape to the meta level. But don't worry, I won't reply anymore.
Mar 16 2014
parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 16 March 2014 at 13:23:33 UTC, Araq wrote:
 I note that you are not able to counter my argument and so you 
 escape to the meta level. But don't worry, I won't reply 
 anymore.
Discussing OO without a context is kind of pointless since there are multiple schools in the OO arena. The two main ones being:

1. The original OO analysis & design set forth by the people behind Simula67. Which basically is about representing abstractions (subsets) of the real world in the computer.

2. The ADT approach which you find in C++ std libraries & co.

These two perspectives are largely orthogonal… That said, I think it odd not to use the term "virtual" since it has a long history (Simula has the "virtual" keyword). It would look like a case of being different for the sake of being different.

Then again, I don't really mind virtual by default if whole program optimization is still a future goal for D.

Ola.
Mar 16 2014
parent reply Manu <turkeyman gmail.com> writes:
On 17 March 2014 01:25,
<7d89a89974b0ff40.invalid internationalized.invalid> wrote:

 On Sunday, 16 March 2014 at 13:23:33 UTC, Araq wrote:

 I note that you are not able to counter my argument and so you escape to
 the meta level. But don't worry, I won't reply anymore.
 Discussing OO without a context is kind of pointless since there is
 multiple schools in the OO arena. The two main ones being:
 1. The original OO analysis & design set forth by the people behind
 Simula67. Which basically is about representing abstractions (subsets) of
 the real word in the computer.
 2. The ADT approach which you find in C++ std libraries & co.
 These two perspectives are largely orthogonal… That said, I think it to
 be odd to not use the term "virtual" since it has
 a long history (Simula has the "virtual" keyword). It would look like a
 case of being different for the sake of being different.

 Then again, I don't really mind virtual by default if whole program
 optimization is still a future goal for D.
Whole program optimisation can't do anything to improve the situation; it is possible that DLLs may be loaded at runtime, so there's nothing the optimiser can do, even at link time.
Mar 16 2014
next sibling parent reply "Ola Fosheim Grøstad" writes:
Manu wrote:
 Whole program optimisation can't do anything to improve the 
 situation; it
 is possible that DLL's may be loaded at runtime, so there's 
 nothing the
 optimiser can do, even at link time.
Not really true. If you know the instance type then you can inline.

It is only when you call through the super class of the instance that you have to explicitly call a function through a pointer.

With a compiler switch or pragmas that tell the compiler what can be dynamically subclassed, the compiler can assume all leaves in the compile time specialization hierarchies to be final.
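The point about knowing the instance type can be illustrated with a small D sketch (hypothetical classes, not from the thread): when the static type is a final leaf class, the call needs no vtable and can be inlined; through the super class it stays virtual.

```d
class Shape
{
    double area() { return 0; } // virtual by default
}

final class Circle : Shape
{
    double r;
    this(double r) { this.r = r; }
    override double area() { return 3.14159 * r * r; }
}

double viaLeaf(Circle c)
{
    // Circle is a final class, so the compiler knows the exact
    // call target and can devirtualise and inline it.
    return c.area();
}

double viaBase(Shape s)
{
    // Only the super class is known here; without whole program
    // information this remains an indirect vtable call.
    return s.area();
}
```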
Mar 16 2014
parent reply Marco Leise <Marco.Leise gmx.de> writes:
On Mon, 17 Mar 2014 04:37:10 +0000,
"Ola Fosheim Grøstad"
<ola.fosheim.grostad+dlang gmail.com> wrote:

 Manu wrote:
 Whole program optimisation can't do anything to improve the
 situation; it
 is possible that DLL's may be loaded at runtime, so there's
 nothing the
 optimiser can do, even at link time.

 Not really true. If you know the instance type then you can
 inline.

 It is only when you call through the super class of the instance
 that you have to explicitly call a function through a pointer.
About two years ago we had that discussion and my opinion remains that there are too many "if"s and "assume"s for the compiler. It is not so simple to trace back where an object originated from when you call a method on it. It could be created through the factory mechanism in Object using a runtime string or it could have been passed through a delegate like this:

  window.onClick(myObject);

There are plenty of situations where it is virtually impossible to know the instance type statically.

Whole program analysis only works on ... well, whole programs. If you split off a library or two it doesn't work. E.g. you have your math stuff in a library and in your main program you write:

  Matrix m1, m2;
  m1.crossProduct(m2);

Inside crossProduct (which is in the math lib), the compiler could not statically verify if it is the Matrix class or a sub-class.
 With a compiler switch or pragmas that tell the compiler what can
 be dynamically subclassed the compiler can assume all leaves in
 the compile time specialization hierarchies to be
 final.
Can you explain how this would work and where it is used?

  -nosubclasses=math.matrix.Matrix

would be the same as using this in the project, no?:

  final class FinalMatrix : Matrix {}

--
Marco
Mar 16 2014
parent reply "Ola Fosheim Grøstad" writes:
On Monday, 17 March 2014 at 06:26:09 UTC, Marco Leise wrote:
 About two years ago we had that discussion and my opinion
 remains that there are too many "if"s and "assume"s for the
 compiler.
 It is not so simple to trace back where an object originated
 from when you call a method on it.
It might not be easy, but in my view the language should be designed to support future advanced compilers. If D gains traction on the C++ level then the resources will become available iff the language has the right constructs or affords extensions that make advanced optimizations tractable. What is possible today is less important...

 It could be created through
 the factory mechanism in Object using a runtime string or it
If it is random then you know that it is random. If you want speed you create separate paths for the dominant instance types. Whole program optimization is guided by profiling data.
 There are plenty of situations where it is virtually
 impossible to know the instance type statically.
But you might know that it is either A and B or C and D in most cases. Then you inline those cases and create specialized execution paths where profitable.
 Whole program analysis only works on ... well, whole programs.
 If you split off a library or two it doesn't work. E.g. you
 have your math stuff in a library and in your main program
 you write:

   Matrix m1, m2;
   m1.crossProduct(m2);

 Inside crossProduct (which is in the math lib), the compiler
 could not statically verify if it is the Matrix class or a
 sub-class.
In my view you should avoid not having source access, but even then it is sufficient to know the effect of the function. E.g. you can have a high level specification language asserting pre and post conditions if you insist on closed source.
 With a compiler switch or pragmas that tell the compiler what 
 can be dynamically subclassed the compiler can assume all 
 leaves in the compile time specialization hierarchies to be 
 final.
Can you explain, how this would work and where it is used?
You specify what plugins are allowed to do and access at whatever resolution is necessary to enable the optimizations your program needs? Ola.
Mar 17 2014
parent reply Marco Leise <Marco.Leise gmx.de> writes:
On Mon, 17 Mar 2014 18:16:13 +0000,
"Ola Fosheim Grøstad"
<ola.fosheim.grostad+dlang gmail.com> wrote:

 On Monday, 17 March 2014 at 06:26:09 UTC, Marco Leise wrote:
 About two years ago we had that discussion and my opinion
 remains that there are too many "if"s and "assume"s for the
 compiler.
 It is not so simple to trace back where an object originated
 from when you call a method on it.
 It might not be easy, but in my view the language should be
 designed to support future advanced compilers. If D gains
 traction on the C++ level then the resources will become
 available iff the language has the right constructs or affords
 extensions that make advanced optimizations tractable. What is
 possible today is less important...
Let's just say it will never detect all cases, so the "final" keyword will still be around. Can you find any research papers that indicate that such compiler technology can be implemented with satisfactory results? Because it just sounds like a nice idea on paper to me that only works when a lot of questions have been answered with yes.
 It could be created through
 the factory mechanism in Object using a runtime string or it
If it is random then you know that it is random.
These not entirely random objects from a class hierarchy could well have frequently used final methods like a name or position. I also mentioned objects passed as parameters into delegates.
 If you want
 speed you create separate paths for the dominant instance types.
 Whole program optimization is guided by profiling data.
Another optimization, ok. The compiler still needs to know that the instance type cannot be sub-classed.
 There are plenty of situations where it is virtually
 impossible to know the instance type statically.
 But you might know that it is either A and B or C and D in most
 cases. Then you inline those cases and create specialized
 execution paths where profitable.
Thinking about it, it might not even be good to duplicate code. It could easily lead to instruction cache misses. Also this is way too much involvement from both the coder and the compiler. At this point I'd ask for "final" if it wasn't already there, if just to be sure the compiler gets it right.
 Whole program analysis only works on ... well, whole programs.
 If you split off a library or two it doesn't work. E.g. you
 have your math stuff in a library and in your main program
 you write:

   Matrix m1, m2;
   m1.crossProduct(m2);

 Inside crossProduct (which is in the math lib), the compiler
 could not statically verify if it is the Matrix class or a
 sub-class.
 In my view you should avoid not having source access, but even
 then it is sufficient to know the effect of the function. E.g.
 you can have a high level specification language asserting pre
 and post conditions if you insist on closed source.
More shoulds and cans and ifs... :-(
 With a compiler switch or pragmas that tell the compiler what
 can be dynamically subclassed the compiler can assume all
 leaves in
 the compile time specialization hierarchies to be
 final.
Can you explain how this would work and where it is used?
 You specify what plugins are allowed to do and access at whatever
 resolution is necessary to enable the optimizations your program
 needs?

 Ola.
I don't get the big picture. What does the compiler have to do with plugins? And what do you mean by allowed to do and access, and how does it interact with the virtuality of a method? I'm confused.

--
Marco
Mar 18 2014
parent reply "Ola Fosheim Grøstad" writes:
On Tuesday, 18 March 2014 at 13:01:56 UTC, Marco Leise wrote:
 Let's just say it will never detect all cases, so the "final"
 keyword will still be around. Can you find any research papers
 that indicate that such compiler technology can be implemented
 with satisfactory results? Because it just sounds like a nice
 idea on paper to me that only works when a lot of questions
 have been answered with yes.
I don't think this is such a theoretically interesting question. Isn't this actually a special case of a partial correctness proof where you try to establish constraints on types? I am sure you can find a lot of papers covering bits and pieces of that.
 These not entirely random objects from a class hierarchy could
 well have frequently used final methods like a name or
 position. I also mentioned objects passed as parameters into
 delegates.
I am not sure I understand what you are getting at. You start with the assumption that a pointer to base class A is the full set of that hierarchy. Then establish constraints for all the subclasses it cannot be. Best effort. Then you can inline any virtual function call that is not specialized across that constrained result set. Or you can inline all candidates in a switch statement and let the compiler do common subexpression elimination & co.
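A hand-written sketch of that transformation in D (hypothetical classes, illustration only): test for the dominant concrete types, inline their bodies behind cheap type checks, and keep the virtual call only on the cold path.

```d
class Shape { double area() { return 0; } }

class Circle : Shape
{
    double r;
    this(double r) { this.r = r; }
    override double area() { return 3.14159 * r * r; }
}

class Square : Shape
{
    double side;
    this(double side) { this.side = side; }
    override double area() { return side * side; }
}

double speculativeArea(Shape s)
{
    // Fast paths: inlined bodies of the profile-dominant candidates.
    if (auto c = cast(Circle) s)
        return 3.14159 * c.r * c.r;
    if (auto q = cast(Square) s)
        return q.side * q.side;
    // Cold path: the ordinary virtual call for everything else.
    return s.area();
}
```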
 If you want speed you create separate paths for the dominant 
 instance types. Whole program optimizations is guided by 
 profiling data.
Another optimization, ok. The compiler still needs to know that the instance type cannot be sub-classed.
Not really. It only needs to know that in the current execution path you do have an instance of type X (which is most frequent); then you have another execution path for the inverted set.
 Thinking about it, it might not even be good to duplicate
 code. It could easily lead to instruction cache misses.
You have heuristics for that. After all, you do have the execution pattern. You have the data of a running system on typical input. If you log all input events (which is useful for a simulation) you can rerun the program in as many configurations as you want. Then you skip the optimizations that lead to worse performance.
 Also this is way too much involvement from both the coder and
 the compiler.
Why? Nobody claimed that near optimal whole program optimization has to be fast.
 At this point I'd ask for "final" if it wasn't already there, 
 if just to be sure the compiler gets it right.
Nobody said that you should not have final, but final won't help you inline virtual functions where possible.
 you can have a high level specification language asserting pre 
 and post conditions if you insist on closed source.
More shoulds and cans and ifs... :-(
Err… well, you can of course start with a blank slate after calling a closed source library function.
 I don't get the big picture. What does the compiler have to do
 with plugins? And what do you mean by allowed to do and
 access and how does it interact with virtuality of a method?
 I'm confused.
In my view plugins should not be allowed to subclass. I think it is ugly, but if they are allowed to, then you need to tell the compiler which classes it can subclass, instantiate etc., as well as what side effects the call to the plugin may and may not have. Why is that confusing? If you shake the world, you need to tell the compiler what the effect is. Otherwise you have to assume "anything" upon return from said function call.

That said, I am personally not interested in plugins without constraints imposed on them (or at all). Most programs can do fine with just static linkage, so I find the whole dynamic linkage argument less interesting.

Closed source library calls are more interesting, especially if you can say something about the state of that library. That could provide you with detectors for wrong library usage (which could be the OS itself). E.g. that a file has to be opened before it is closed etc.
Mar 18 2014
parent reply "dude" <nothx yahoo.com> writes:
  Nobody uses D, so worrying about breaking backwards compatibility
for such an obvious improvement is pretty funny :)

  D should just do what Lua does.

  Lua breaks backwards compatibility at every version. Why is it 
not a problem? If you don't want to upgrade, just keep using the 
older compiler! It isn't like it ceased to exist--
Mar 18 2014
parent reply "Ola Fosheim Grøstad" writes:
On Tuesday, 18 March 2014 at 18:11:27 UTC, dude wrote:
 Nobody uses D, so worrying about breaking backwards compatibly 
 for such an obvious improvement is pretty funny:)
I kind of agree with you if it happens once and is a sweeping change that fixes the syntactical warts as well as the semantical ones.
  Lua breaks backwards compatibility at every version. Why is it 
 not a problem? If you don't want to upgrade, just keep using 
 the older compiler! It isn't like it ceased to exist--
It is a problem because commercial developers have to count hours and need a production compiler that is maintained. If your budget is 4 weeks of development, then you don't want another 1 week to fix compiler induced bugs. Why?

1. Because you have already signed a contract on a certain amount of money based on estimates of how much work it is. All extra costs are cutting into profitability.

2. Because you have library dependencies. If a bug is fixed in library version 2 which requires version 3 of the compiler, then you need to upgrade to version 3 of the compiler. That compiler had better not break the entire application and bring you into a mess of unprofitable work.

Is attracting commercial developers important for D? I think so, not because they contribute lots of code, but because they care about the production quality of the narrow libraries they do create and are more likely to maintain them over time. They also have a strong interest in submitting good bug reports and fixing performance bottlenecks.
Mar 18 2014
parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Ola Fosheim Grøstad:

 Is attracting commercial developers important for D?
In this phase of D life commercial developers can't justify to have the language so frozen that you can't perform reasonable improvements as the one discussed in this thread. Bye, bearophile
Mar 18 2014
parent "Ola Fosheim Grøstad" writes:
On Tuesday, 18 March 2014 at 18:34:14 UTC, bearophile wrote:
 In this phase of D life commercial developers can't justify to 
 have the language so frozen that you can't perform reasonable 
 improvements as the one discussed in this thread.
I don't disagree, but D is suffering from not having a production ready compiler/runtime with a solid optimizing backend in maintenance mode. So it is giving other languages "free traction" rather than securing its own position.

I think there is a bit too much focus on standard libraries, because not having libraries does not prevent commercial adoption. Commercial devs can write their own C-bindings if the core language, compiler and runtime is solid. If the latter is not solid then only commercial devs that can commit lots of resources to D will pick it up and keep using it (basically the ones that are willing to turn themselves into D shops).

Perhaps also D2 was announced too early, and then people jumped onto it expecting it to come about "real soon". Hopefully the language designers will do D3 design on paper behind closed doors for a while before announcing progress and perhaps even deliberately keep it at gamma/alpha quality in order to prevent devs jumping ship to D2 prematurely. :-)

That is how I view it, anyway.
Mar 18 2014
prev sibling parent "Marc Schütz" <schuetzm gmx.net> writes:
On Monday, 17 March 2014 at 01:05:09 UTC, Manu wrote:
 Whole program optimisation can't do anything to improve the 
 situation; it
 is possible that DLL's may be loaded at runtime, so there's 
 nothing the
 optimiser can do, even at link time.
With everything being exported by default, this is true. But should DIP45 be implemented, LTO/WPO will be able to achieve a lot more, at least if the classes in question are not (or not fully) exported.
Mar 19 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/14/2014 9:02 PM, Manu wrote:
 That said, function inlining is perhaps the single most important API level
 performance detail, and especially true in OO code (which advocates
 accessors/properties).
I find it peculiar to desire a 'final accessor'. After all,

    class C {
        int x;
        final int getX() { return x; }  // <= what the heck is this function for?
    }

The only reason to have an accessor function is so it can be virtual. If programmers are going to thoughtlessly follow rules like this, they might as well follow the rule:

    class C {
    final:
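A minimal sketch of that pattern (hypothetical class, illustration only): everything after the final: label is final, and anything that must stay virtual is simply declared before the label.

```d
class Widget
{
    private int w, h;

    // Declared before the label, so it remains virtual
    // and can still be overridden in subclasses.
    void draw() { }

final:
    // Everything from here down is final and freely inlinable.
    int width() { return w; }
    int height() { return h; }
}
```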
 Compile some release code without -inline and see what the performance
 difference is,
I'm well aware of the advantages of inlining.
 The length you're willing to go to to resist a relatively minor breaking
change,
It's a major breaking change. It'll break nearly every D program out there that uses classes.
 I understand that you clearly don't believe in this change, and I grant that is
 your prerogative, but I really don't get why... I just can't see it when
 considering the balance.
You may not agree with me, but understanding my position shouldn't be too hard. I've expounded at length on it.
 Can you honestly tell me that you truly believe that library authors will
 consider, as a matter of common sense, the implications of virtual (the silent
 default state) in their api?
I thought I was clear in that I believe it is a pipe dream to believe that code with nary a thought given to performance is going to be performant.

Besides, they don't have to consider anything or have any sense. Just blindly do this:

     class C {
         final:
         ...
     }
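Spelled out in full, the pattern amounts to something like this (a sketch with hypothetical member names, not code from the thread):

     class C
     {
         int x;

         // anything that genuinely must stay overridable is declared
         // before the label...
         void update() { }

     final: // ...and everything from here down is non-virtual
         int getX() { return x; }
         void setX(int v) { x = v; }
     }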
 Or you don't consider that to be something worth worrying about, ie, you truly
 believe that I'm making a big deal out of nothing; that I will never actually,
 in practise, encounter trivial accessors and properties that can't inline
 appearing in my hot loops, or other related issues?
I think we're just going around in circles. I've discussed all this before, in this thread.
Mar 15 2014
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Saturday, 15 March 2014 at 07:36:12 UTC, Walter Bright wrote:
 The only reason to have an accessor function is so it can be 
 virtual.
No:

1. To have more readable code: using x, y, z, w to access an array vector

2. Encapsulation/interfacing to differing implementations.

Seems to me that the final-by-default transition can be automated by source translation.

Please don't send D further into the land of obscurity by adding !final... At some point someone will create D--...
Mar 15 2014
prev sibling next sibling parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Walter Bright"  wrote in message news:lg0vtc$2q94$1 digitalmars.com...

 I find it peculiar to desire a 'final accessor'. After all,

      class C {
          int x;
          final int getX() { return x; } <= what the heck is this function 
 for?
      }
Yeah, it's stupid, but people do it all over the place anyway.
 It's a major breaking change. It'll break nearly every D program out there 
 that uses classes.
This is nonsense. I tried out the warning on some of my projects, and they required ZERO changes - because it's a warning!

Phobos requires 37 "virtual:"s to be added - or just change the makefile to use '-wi' instead of '-w'. Druntime needed 25.

We don't even need to follow the usual 6-months-per-stage deprecation - we could leave it as a warning for 2 years if we wanted!

Grepping for class declarations and sticking in "virtual:" is as trivial as a fix can possibly be.
Mar 15 2014
next sibling parent reply Manu <turkeyman gmail.com> writes:
On 15 March 2014 18:50, Daniel Murphy <yebbliesnospam gmail.com> wrote:

 "Walter Bright"  wrote in message news:lg0vtc$2q94$1 digitalmars.com...


  I find it peculiar to desire a 'final accessor'. After all,
      class C {
          int x;
          final int getX() { return x; } <= what the heck is this function
 for?
      }
Yeah, it's stupid, but people do it all over the place anyway.
Religiously. They're taught to do this in books and at university, deliberately.

Seriously though, there are often reasons to put an interface in the way; you can change the implementation without affecting the interface at some later time, data can be compressed or stored in an internal format that is optimal for internal usage, or some useful properties can be implied rather than stored explicitly. Programmers (reasonably) expect they are inlined. For instance, framesPerSecond() and timeDelta() are the reciprocal of each other; only one needs to be stored.

I also have very many instances of classes with accessors to provide user-facing access to packed internal data, which may require some minor bit-twiddling and casting to access. I don't think this is unusual; any programmer is likely to do this. empty(), length(), front(), etc. are classic examples where it might not just return a variable directly. Operator overloads... >_<

 It's a major breaking change. It'll break nearly every D program out there
 that uses classes.
This is nonsense. I tried out the warning on some of my projects, and they required ZERO changes - because it's a warning! Phobos requires 37 "virtual:"s to be added - or just change the makefile to use '-wi' instead of '-w'. Druntime needed 25. We don't even need to follow the usual 6-months per stage deprecation - We could leave it as a warning for 2 years if we wanted! Grepping for class declarations and sticking in "virtual:" is as trivial as a fix can possibly be.
My game that I'm hacking on at the moment has only 2 affected classes. The entire game is OO. Most virtuals are introduced by interfaces. So with that in mind, it's not even necessarily true that projects that use classes will be affected by this if they make use of interfaces (I certainly did at Remedy, exclusively).

Phobos is a standard library; surely it's unacceptable for phobos calls to break the optimiser? Consider std.xml for instance; 100% certain to appear in hot data crunching loops. What can be done about this? It can't be fixed, because that's a breaking change. Shall we document that phobos classes should be avoided or factored outside of high frequency code, and hope people read it?
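The interface-based style described above can be sketched like this (hypothetical names; the point is only where the virtual dispatch lives):

     // Dispatch happens through the interface; the class itself is
     // final, so calls through a GLRenderer reference can be inlined.
     interface Renderer
     {
         void draw();
     }

     final class GLRenderer : Renderer
     {
         void draw() { /* ... */ }
     }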
Mar 15 2014
parent "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Manu" <turkeyman gmail.com> wrote in message 
news:mailman.133.1394879414.23258.digitalmars-d puremagic.com...

 Phobos is a standard library, surely it's unacceptable for phobos calls to 
 break the optimiser?
 Consider std.xml for instance; 100% certain to appear in hot data 
 crunching loops.
 What can be done about this? It can't be fixed, because that's a breaking 
 change. Shall we
 document that phobos classes should be avoided or factored outside of high 
 frequency code, and
 hope people read it?
I think std.xml should be avoided for other reasons...
Mar 15 2014
prev sibling next sibling parent "develop32" <develop32 gmail.com> writes:
On Saturday, 15 March 2014 at 08:50:00 UTC, Daniel Murphy wrote:
 This is nonsense.  I tried out the warning on some of my 
 projects, and they required ZERO changes - because it's a 
 warning!

 Phobos requires 37 "virtual:"s to be added - or just change the 
 makefile to use '-wi' instead of '-w'.  Druntime needed 25.

 We don't even need to follow the usual 6-months per stage 
 deprecation - We could leave it as a warning for 2 years if we 
 wanted!

 Grepping for class declarations and sticking in "virtual:" is 
 as trivial as a fix can possibly be.
When the virtual keyword was introduced in Github master I immediately went to add a bunch of "virtual"s in my projects... only to find myself done after a few minutes.

I see some irony in the fact that if classes are made final-by-default, removing all the unnecessary "final" attributes would be a task an order of magnitude longer.
Mar 15 2014
prev sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Daniel Murphy:

 This is nonsense.  I tried out the warning on some of my 
 projects, and they required ZERO changes - because it's a 
 warning!

 Phobos requires 37 "virtual:"s to be added - or just change the 
 makefile to use '-wi' instead of '-w'.  Druntime needed 25.
Andrei has decided to not introduce "final by default" because he thinks it's too large a breaking change. So your real-world data is an essential piece of information for making an informed decision on this topic (so essential that I think any decision made before having such data is void). So are you willing to perform your analysis on some other real D code? Perhaps dub?

Bye,
bearophile
Mar 15 2014
next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
 So are you willing to perform your analysis on some other real 
 D code? Perhaps dub?
Or vibe? Bye, bearophile
Mar 15 2014
prev sibling parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"bearophile"  wrote in message news:yzhcfevgjdzjtzghxtia forum.dlang.org...

 Andrei has decided to not introduce "final by default" because he thinks 
 it's a too much large breaking change. So your real world data is an 
 essential piece of information to perform an informed decision on this 
 topic (so much essential that I think deciding before having such data is 
 void decision). So are you willing to perform your analysis on some other 
 real D code? Perhaps dub?
If anyone wants to try this out on their code, the patch I used was to add this:

    if (ad && !ad->isInterfaceDeclaration() && isVirtual() && !isFinal() &&
        !isOverride() && !(storage_class & STCvirtual) &&
        !(ad->storage_class & STCfinal))
    {
        warning(loc, "virtual required");
    }

Around line 623 in func.c (exact line doesn't matter, just stick it in with the rest of the checks).

I also had to disable the "static member functions cannot be virtual" error.
Mar 15 2014
parent "bearophile" <bearophileHUGS lycos.com> writes:
Daniel Murphy:

 If anyone wants to try this out on their code, the patch I used 
 was to add this:

 if (ad && !ad->isInterfaceDeclaration() && isVirtual() && 
 !isFinal() &&
    !isOverride() && !(storage_class & STCvirtual) && 
 !(ad->storage_class & STCfinal))
 {
    warning(loc, "virtual required");
 }

 Around line 623 in func.c (exact line doesn't matter, just 
 stick it in with the rest of the checks)

 I also had to disable the "static member functions cannot be 
 virtual" error.
In the meantime has someone else measured experimentally the amount of breakage a "final by default" causes in significant D programs? Bye, bearophile
Mar 16 2014
prev sibling next sibling parent "monarch_dodra" <monarchdodra gmail.com> writes:
On Saturday, 15 March 2014 at 07:36:12 UTC, Walter Bright wrote:
 On 3/14/2014 9:02 PM, Manu wrote:
 That said, function inlining is perhaps the single most 
 important API level
 performance detail, and especially true in OO code (which 
 advocates
 accessors/properties).
I find it peculiar to desire a 'final accessor'. After all,

     class C {
         int x;
         final int getX() { return x; } <= what the heck is this function for?
     }

The only reason to have an accessor function is so it can be virtual.
Um... Read-only attributes? Forgot the discussions about @property? This makes sense to me:

    class C
    {
        private int _x;

        ///Gets x
        final int x() @property { return _x; }
    }
Mar 15 2014
prev sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
Am 15.03.2014 08:36, schrieb Walter Bright:
 On 3/14/2014 9:02 PM, Manu wrote:
 That said, function inlining is perhaps the single most important API
 level
 performance detail, and especially true in OO code (which advocates
 accessors/properties).
I find it peculiar to desire a 'final accessor'. After all,

     class C {
         int x;
         final int getX() { return x; } <= what the heck is this function for?
     }

The only reason to have an accessor function is so it can be virtual.
I don't agree. In any language with properties, accessors also allow for:

- lazy initialization

- changing the underlying data representation without requiring client code to be rewritten

- implementing access optimizations if the data is too costly to keep around

--
Paulo
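A minimal sketch of the lazy-initialization case (the names here are made up for illustration):

     class Config
     {
         private string _path;

         // The accessor computes the value on first use; callers are
         // none the wiser if the representation changes later.
         final string path() @property
         {
             if (_path is null)
                 _path = computeDefaultPath();
             return _path;
         }

         private string computeDefaultPath() { return "app.conf"; }
     }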
Mar 15 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/15/2014 2:21 AM, Paulo Pinto wrote:
 In any language with properties, accessors also allow for:

 - lazy initialization

 - changing the underlying data representation without requiring client code to
 be rewritten

 - implement access optimizations if the data is too costly to keep around
You can always add a property function later without changing user code.
Mar 15 2014
next sibling parent reply Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-03-15 18:18:27 +0000, Walter Bright <newshound2 digitalmars.com> said:

 On 3/15/2014 2:21 AM, Paulo Pinto wrote:
 In any language with properties, accessors also allow for:
 
 - lazy initialization
 
 - changing the underlying data representation without requiring client code to
 be rewritten
 
 - implement access optimizations if the data is too costly to keep around
You can always add a property function later without changing user code.
In some alternate universe where clients restrict themselves to documented uses of APIs, yes. Not if the client decides he wants to use ++ on the variable, or take its address, or pass it by ref to another function (perhaps without even noticing). And it also breaks binary compatibility.

If you control the whole code base it's reasonable to say you won't bother with properties until they're actually needed for some reason. It's easy enough to refactor things whenever you decide to make the change. But if you're developing a library for others to use, it's better to be restrictive from the start... if you care about not breaking your clients' code base, that is.

It basically comes down to the same reasons why final-by-default is better than virtual-by-default: it's better to start with a restrictive API and then expand it as needed than to be stuck with an API that restricts your implementation choices later on.

-- 
Michel Fortin
michel.fortin michelf.ca
http://michelf.ca
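A small sketch of the breakage described above (hypothetical type):

     class Counter
     {
         int count; // a public field today
     }

     void client(Counter c)
     {
         c.count++;          // fine against a field
         auto p = &c.count;  // fine against a field
         // If count later becomes a property function, neither line
         // above compiles any more: no lvalue for ++, no address.
     }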
Mar 15 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/15/2014 11:33 AM, Michel Fortin wrote:
 And it also breaks binary compatibility.
Inlining also breaks binary compatibility. If you want optimizations and the ability to change things, you've got to give up binary compatibility. If you want maximum flexibility, such as changing classes completely, use interfaces with virtual dispatch.

Maximum flexibility, maximum optimization, and binary compatibility, all while not putting any thought into the API design, isn't going to happen no matter what the defaults are.
Mar 15 2014
prev sibling next sibling parent reply "Kapps" <opantm2+spam gmail.com> writes:
On Saturday, 15 March 2014 at 18:18:28 UTC, Walter Bright wrote:
 On 3/15/2014 2:21 AM, Paulo Pinto wrote:
 In any language with properties, accessors also allow for:

 - lazy initialization

 - changing the underlying data representation without 
 requiring client code to
 be rewritten

 - implement access optimizations if the data is too costly to 
 keep around
You can always add a property function later without changing user code.
In many situations you can't. As was already mentioned, ++ and taking the address of it were two such situations. ABI compatibility is also a large problem (less so in D for now, but it will be in the future). Structs change, positions change, data types change. If users use your struct directly, accessing its fields, then once you make even a minor change, their code will break in unpredictable ways.

This was a huge annoyance for me when trying to deal with libjpeg. There are multiple versions, and these versions have a different layout for the struct. If the wrong library is linked, the layout is different. Since it's a D binding to a C file, you can't just use the C header which you know to be up to date on your system; instead you have to make your own binding and hope for the best. They try to work around this by making you pass in a version string when creating the libjpeg structs and failing if this string does not exactly match the loaded version. This creates a further mess. It's a large problem, and there's talk of trying to eventually deprecate public field access in libjpeg in favour of accessors like libpng has done (though libpng still uses the annoying passing-in of a version, since they did not use accessors from the start and some fields remained public).

Accessors are absolutely required if you intend to make a public library, and exposed fields should be avoided completely.
Mar 15 2014
parent reply Marco Leise <Marco.Leise gmx.de> writes:
Am Sat, 15 Mar 2014 21:25:51 +0000
schrieb "Kapps" <opantm2+spam gmail.com>:

 On Saturday, 15 March 2014 at 18:18:28 UTC, Walter Bright wrote:
 On 3/15/2014 2:21 AM, Paulo Pinto wrote:
 In any language with properties, accessors also allow for:

 - lazy initialization

 - changing the underlying data representation without 
 requiring client code to
 be rewritten

 - implement access optimizations if the data is too costly to 
 keep around
You can always add a property function later without changing user code.
In many situations you can't. As was already mentioned, ++ and taking the address of it were two such situations. ABI compatibility is also a large problem (less so in D for now, but it will be in the future). Structs change, positions change, data types change. If users use your struct directly, accessing its fields, then once you make even a minor change, their code will break in unpredictable ways. This was a huge annoyance for me when trying to deal with libjpeg. There are multiple versions and these versions have a different layout for the struct. If the wrong library is linked, the layout is different. Since it's a D binding to a C file, you can't just use the C header which you know to be up to date on your system, instead you have to make your own binding and hope for the best. They tr y to work around this by making you pass in a version string when creating the libjpeg structs and failing if this string does not exactly match what the loaded version. This creates a further mess. It's a large problem, and there's talk of trying to eventually deprecate public field access in libjpeg in favour of accessors like libpng has done (though libpng still uses the annoying passing in version since they did not use accessors from the start and some fields remained public). Accessors are absolutely required if you intend to make a public library and exposed fields should be avoided completely.
What about the way Microsoft went with the Win32 API?

- struct fields are exposed
- layouts may change only by appending fields to them
- they are always passed by pointer
- the actual size is stored in the first data field

I think this is worth a look, since all these function calls don't come for free. (Imagine photo management software that has to check various properties of 20_000 images.)

-- 
Marco
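A sketch of that size-prefixed pattern (hypothetical struct and function; the cbSize naming mirrors the Win32 convention):

     struct ImageInfo
     {
         uint cbSize; // caller fills in ImageInfo.sizeof
         int  width;
         int  height;
         // new fields may only ever be appended here
     }

     void getImageInfo(ImageInfo* info)
     {
         // Only touch the fields the caller's (possibly older)
         // version of the struct actually has, per cbSize.
         if (info.cbSize >= 12)
         {
             info.width  = 640;
             info.height = 480;
         }
     }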
Mar 15 2014
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
 I think this is worth a look. Since all these function calls
 don't come for free. (Imagine a photo management software
 that has to check various properties of 20_000 images.)
It comes for free if you enforce inlining and recompile for major revisions of libs.
Mar 15 2014
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2014-03-16 00:11, Marco Leise wrote:

 What about the way Microsoft went with the Win32 API?
 - struct fields are exposed
 - layouts may change only by appending fields to them
 - they are always passed by pointer
 - the actual size is stored in the first data field

 I think this is worth a look. Since all these function calls
 don't come for free. (Imagine a photo management software
 that has to check various properties of 20_000 images.)
The modern runtime for Objective-C has a non-fragile ABI for its classes. Instead of accessing a field at a compile-time-known offset, an offset calculated at runtime/load time is used. This allows fields to be freely reorganized without breaking subclasses.

-- 
/Jacob Carlborg
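Roughly, the mechanism amounts to this (a sketch of the idea only, not actual Objective-C runtime code):

     // The offset is a global variable patched at load time, instead
     // of a constant baked into every caller.
     __gshared size_t offsetOf_x; // filled in by the loader/runtime

     int readX(void* obj)
     {
         return *cast(int*)(cast(ubyte*)obj + offsetOf_x);
     }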
Mar 16 2014
prev sibling parent "Anonymouse" <asdf asdf.com> writes:
On Saturday, 15 March 2014 at 18:18:28 UTC, Walter Bright wrote:
 On 3/15/2014 2:21 AM, Paulo Pinto wrote:
 In any language with properties, accessors also allow for:

 - lazy initialization

 - changing the underlying data representation without 
 requiring client code to
 be rewritten

 - implement access optimizations if the data is too costly to 
 keep around
You can always add a property function later without changing user code.
Cough getopt.
Mar 15 2014
prev sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Thursday, 13 March 2014 at 05:51:05 UTC, Walter Bright wrote:
 I'd like to address, in general, the issue of, what I term, 
 "performance by default" which is part of the argument for 
 final by default.

 C, C++, and D are billed as languages for writing high 
 performance apps. And this is true. What is not true, or at 
 least has not been true in my experience, is that an 
 application is high performance merely because it is written in 
 C, C++ or D.

 Let me emphasize, code is not fast just because it is written 
 using a high performance language.

 High performance code does not happen by accident, it has to be 
 intentionally written that way. Furthermore, I can pretty much 
 guarantee you that if an application has never been profiled, 
 its speed can be doubled by using a profiler. And if you 
 really, really want high performance code, you're going to have 
 to spend time looking at the assembler dumps of the code and 
 tweaking the source code to get things right.

 High performance code is not going to emanate from those 
 programmers who are not skilled in the art, it is not going to 
 happen by accident, it is not going to happen by following best 
 practices, it is not going to happen just because you're 
 writing in C/C++/D.

 D certainly provides what is necessary to write code that blows 
 away conventional C or C++ code.

 It reminds me of when I worked in a machine shop in college. 
 I'd toil for hours cutting parts, and the parts were not round, 
 the holes were off center, heck, the surfaces weren't that 
 smooth. There was an older machinist there who'd take pity on 
 me. He'd look at what I was doing, cluck cluck, he'd touch the 
 bit with a grinder, he'd tweak the feed speed, he'd make arcane 
 adjustments to the machine tool, and out would come perfect 
 parts. I am an awe to this day of his skill - I still don't 
 know how he did it. The point is, he and I were using the same 
 tools. He knew how to make them sing, I didn't.

 I still cannot drill a goddam hole and get it where I measured 
 it should be.
Fully agree with you. That is how we get around writing distributed applications in JVM/.NET with comparable performance to the C++ ones being replaced. -- Paulo
Mar 13 2014
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Wednesday, March 12, 2014 15:50:06 Walter Bright wrote:
 So, there's the solution that has been proposed before:
 
     !final
     !pure
     !nothrow
     etc.
It's arguably a bit ugly, but it's also arguably sorely needed - certainly the lack thereof has been complained about plenty of times before. So, completely aside from the issue of final by default, having a mechanism for negating all of these attributes would be quite valuable.

And every other syntax that I can think of for this at the moment which wouldn't introduce another keyword would be overly verbose - though if we were going to introduce keywords, something like not_final, not_pure, not_nothrow, etc. would be straightforward and probably wouldn't even break code (though you never know). The bang avoids that need though. The primary downside IMHO is how easily it could be missed when scanning over code, but we have that problem already when using bang in a condition, so that's nothing new.

- Jonathan M Davis
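To make the proposal concrete, it would look roughly like this (a sketch of the *proposed* syntax under discussion - this was never implemented in D):

     class C
     {
     final:
         int a() { return 1; }        // final, via the label

         !final int b() { return 2; } // proposed: opt back in to virtual
     }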
Mar 13 2014
prev sibling next sibling parent reply Martin Krejcirik <mk-junk i-line.cz> writes:
To me it seems this whole debate highlights the problem in the release
cycle, not in the development of the language. DMD needs a long-term
stable point release, supported for at least a year. Everybody should
expect possible breakage in the latest development release, even if
unintended. The latest version should also be marked as
development/experimental on the download page.

-- 
mk
Mar 13 2014
next sibling parent "Andrea Fontana" <nospam example.com> writes:
+1

On Thursday, 13 March 2014 at 10:24:10 UTC, Martin Krejcirik 
wrote:
 To me it seems this whole debate highlights the problem in the 
 release
 cycle, not in the development of the language. DMD needs a 
 long-term
 stable point release, supported for at least a year. Everybody 
 should
 expect possible breakage in the latest development release, 
 even if
 unintended. The latest version should also be marked as
 development/experimental on the download page.
Mar 13 2014
prev sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, March 13, 2014 11:24:10 Martin Krejcirik wrote:
 To me it seems this whole debate highlights the problem in the release
 cycle, not in the development of the language. DMD needs a long-term
 stable point release, supported for at least a year. Everybody should
 expect possible breakage in the latest development release, even if
 unintended. The latest version should also be marked as
 development/experimental on the download page.
Considering that a lot of the breakage that occurs (most?) actually results from fixing bugs, I don't think that trying to maintain "bugfix" releases is really going to help much. There may ultimately still be some value in them, but most people who suggest this sort of solution seem to be under the impression that it would somehow stop accidental code breakage, and that isn't the case at all. Pretty much the only way to guarantee no accidental breakage is to make no changes at all - including bug fixes. - Jonathan M Davis
Mar 13 2014
parent Martin Krejcirik <mk-junk i-line.cz> writes:
Dne 13.3.2014 12:07, Jonathan M Davis napsal(a):
 really going to help much. There may ultimately still be some value in them,
 but most people who suggest this sort of solution seem to be under the
 impression that it would somehow stop accidental code breakage, and that isn't
The point of a long-term stable release is not that no bugs can ever be introduced. The point is that there are fewer changes, and no language or library changes which force the user to update his source code (accept-invalid aside, maybe). Currently users have the choice to either:

- stick with some old release with no bugfixes at all
- always upgrade to the latest release, which mixes bugfixes with language and library changes

What if a user wants just regressions and (some) major bugs fixed? Also, a long-term stable release would make a base against which deprecations can be marked.

-- 
mk
Mar 13 2014
prev sibling next sibling parent reply "Suliman" <evermind live.ru> writes:
A lot of people here are talking about C. They are saying that C 
almost never has changes that break compatibility, and that this is 
good. But people forget that D is a much more complex language 
than C is.

Look at how C# evolved from a stupid clone of Java into a very 
powerful and modern language.

But also look at Python. The Python developers were afraid of 
breaking backward compatibility for a long time, and as a result we 
have two incompatible versions, 2.x and 3.x. And the 3.x version was 
released 6 years ago (!).

I understand that people do not like fixing code, but it's 
better to fix the most important things now than to get pain in the 
future.

IMHO.
Mar 13 2014
next sibling parent "Rikki Cattermole" <alphaglosined gmail.com> writes:
On Thursday, 13 March 2014 at 11:14:56 UTC, Suliman wrote:
 A lot of people here are talking about C. They are saying that 
 C almost do not have changes that had break compatibility and 
 it's good. But people are forgot, that D is much more complex 
 language then C is.


 how C# evolved from a stupid clone of Java into a very powerful 
 and modern language.

 But also look at Python. Pythons are afraid about backward 
 compatibility for a long time and as result we have 2 
 incompatibility version 2.x and 3.x. And 3.x version was 
 released 6 years ago (!).

 I understand that people do not like to fixing code, but it's 
 better to fix most important things now, than to get pain in 
 the future.

 IMHO.
+1

A great example is how pyglet, which is highly used for 3D games, isn't really fully ported to Python 3.x. Although it does work, mostly. Another example is how PIL (Python Imaging Library) isn't ported in any form to 3.x yet. In some ways Python is in a worse position than D is for image libraries, which is kinda scary.

But in saying this, it's bound to happen when there is money to keep another major version updated.
Mar 13 2014
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/13/14, 4:14 AM, Suliman wrote:
 A lot of people here are talking about C. They are saying that C almost
 do not have changes that had break compatibility and it's good. But
 people are forgot, that D is much more complex language then C is.
C++. Andrei
Mar 13 2014
prev sibling next sibling parent "Namespace" <rswhite4 googlemail.com> writes:
    !final
    !pure
    !nothrow
    etc.
As an alternative: how about using a syntax similar to noexcept in C++11? http://en.cppreference.com/w/cpp/language/noexcept That would result in final(true) and final(false). Same for pure etc.
Mar 13 2014
prev sibling next sibling parent reply "Suliman" <evermind live.ru> writes:
It's maybe not proper topic but maybe someone disagreed with 
current way of D may interesting to continue Amber project. 
https://bitbucket.org/larsivi/amber/commits/all
Mar 13 2014
parent "bearophile" <bearophileHUGS lycos.com> writes:
Suliman:

 It's maybe not proper topic but maybe someone disagreed with 
 current way of D may interesting to continue Amber project. 
 https://bitbucket.org/larsivi/amber/commits/all
From this page: https://bitbucket.org/larsivi/amber/wiki/Diff_D1
 No comma expression
We could and should disallow some usages of the comma operator in D2 too.
 A class with no constructors inherits the base class 
 constructors, if any.
Is this a good idea?
 synchronized will be deprecated, and is already now just an 
 alias to the guard statement
 Struct-interfaces
 guard-statement
? Bye, bearophile
Mar 13 2014
prev sibling next sibling parent "Dejan Lekic" <dejan.lekic gmail.com> writes:
On Wednesday, 12 March 2014 at 22:50:00 UTC, Walter Bright wrote:
 The argument for final by default, as eloquently expressed by 
 Manu, is a good one. Even Andrei agrees with it (!).

 The trouble, however, was illuminated most recently by the 
 std.json regression that broke existing code. The breakage 
 wasn't even intentional; it was a mistake. The user fix was 
 also simple, just a tweak here and there to user code, and the 
 compiler pointed out where each change needed to be made.

 But we nearly lost a major client over it.

 We're past the point where we can break everyone's code. It's 
 going to cost us far, far more than we'll gain. (And you all 
 know that if we could do massive do-overs, I'd get rid of put's 
 auto-decode.)

 Instead, one can write:

    class C { final: ... }

 as a pattern, and everything in the class will be final. That 
 leaves the "but what if I want a single virtual function?" 
 There needs to be a way to locally turn off 'final'. Adding 
 'virtual' is one way to do that, but:

 1. there are other attributes we might wish to turn off, like 
 'pure' and 'nothrow'.

 2. it seems excessive to dedicate a keyword just for that.

 So, there's the solution that has been proposed before:

    !final
    !pure
    !nothrow
    etc.
I am still not convinced about final-by-default. However, discussions here have proven that in many cases people would benefit from it.

So, how about we make the next major release branch, 2.1, and appoint some 2.1 maintainers (Manu, for example, as he is one of the major forces behind this movement). These individuals will filter and merge changes from the 2.0 branch into 2.1. This will nicely allow companies to have a transition period from 2.0.x to 2.1.x at a time of their convenience.

Kind regards
Mar 13 2014
prev sibling next sibling parent Iain Buclaw <ibuclaw gdcproject.org> writes:
On 12 Mar 2014 22:50, "Walter Bright" <newshound2 digitalmars.com> wrote:
 The argument for final by default, as eloquently expressed by Manu, is a
good one. Even Andrei agrees with it (!).
 The trouble, however, was illuminated most recently by the std.json
regression that broke existing code. The breakage wasn't even intentional; it was a mistake. The user fix was also simple, just a tweak here and there to user code, and the compiler pointed out where each change needed to be made.
 But we nearly lost a major client over it.
std.json isn't precisely on the list of modules that showcase Phobos in a good light. We'd be better off giving the module some TLC and updating the API so that it's consistent with the rest of Phobos in regard to coding standards and unittest coverage.
 We're past the point where we can break everyone's code. It's going to
cost us far, far more than we'll gain. (And you all know that if we could do massive do-overs, I'd get rid of put's auto-decode.)
 Instead, one can write:

    class C { final: ... }

 as a pattern, and everything in the class will be final. That leaves the
"but what if I want a single virtual function?" There needs to be a way to locally turn off 'final'. Adding 'virtual' is one way to do that, but:
 1. there are other attributes we might wish to turn off, like 'pure' and
'nothrow'.
 2. it seems excessive to dedicate a keyword just for that.

 So, there's the solution that has been proposed before:

    !final
    !pure
    !nothrow
    etc.
Yuck.
Mar 13 2014
"ixid" <nuaccount gmail.com> writes:
On Wednesday, 12 March 2014 at 22:50:00 UTC, Walter Bright wrote:
 The argument for final by default, as eloquently expressed by 
 Manu, is a good one. Even Andrei agrees with it (!).

 The trouble, however, was illuminated most recently by the 
 std.json regression that broke existing code. The breakage 
 wasn't even intentional; it was a mistake. The user fix was 
 also simple, just a tweak here and there to user code, and the 
 compiler pointed out where each change needed to be made.

 But we nearly lost a major client over it.

 We're past the point where we can break everyone's code. It's 
 going to cost us far, far more than we'll gain. (And you all 
 know that if we could do massive do-overs, I'd get rid of put's 
 auto-decode.)

 Instead, one can write:

    class C { final: ... }

 as a pattern, and everything in the class will be final. That 
 leaves the "but what if I want a single virtual function?" 
 There needs to be a way to locally turn off 'final'. Adding 
 'virtual' is one way to do that, but:

 1. there are other attributes we might wish to turn off, like 
 'pure' and 'nothrow'.

 2. it seems excessive to dedicate a keyword just for that.

 So, there's the solution that has been proposed before:

    !final
    !pure
    !nothrow
    etc.
I have no horse in the final race, but I would say: be wary of becoming too conservative because a client overreacts to a regression. Their reaction sounds unreasonable, especially given how quickly the issue was addressed. Where did the narrative that it was a deliberate breaking change come from?
Mar 13 2014
"Jesse Phillips" <Jesse.K.Phillips+D gmail.com> writes:
On Wednesday, 12 March 2014 at 22:50:00 UTC, Walter Bright wrote:
 We're past the point where we can break everyone's code. It's 
 going to cost us far, far more than we'll gain. (And you all 
 know that if we could do massive do-overs, I'd get rid of put's 
 auto-decode.)
I figure I'll throw in my thoughts (not to convince you of what is right). I don't have a bread-and-butter program in D; it's mainly just a hobby.

I don't agree with the choice behind making this flip-flop. I was surprised to find out that final-by-default had been approved to begin with; you were already aware of the desire not to "break everyone's code," pushing for that concern throughout discussions. But eventually something made you change your mind, and now a very different situation made you change it back.

D2 shouldn't be arbitrarily frozen like was done with D1. There are things which need to be done and things which should be done, some of which aren't even known yet. It doesn't sound like the intention is to go that far, so I'm hoping this is common ground: some of this stuff will break a lot of code. But we are scared, because D1 got the "stable" treatment and that meant things remained broken.

D has had a history of random breaking changes (great work on killing most regressions before release), but planned/expected breaking changes still don't get the upgrade path they deserve. Daniele said it best: the code should compile with the current release and the next release, or more. Meaning old code compiles with the new compiler, and code changed to remove the use of deprecations still compiles with the older compiler. (This allows local builds to update code for a new compiler while the automated builds continue to build the same code with the older compiler.)

If we can do this, and even extend it out for longer than the "next" release, we're providing a good upgrade path. This will likely please the majority of users that require stability, even when a change doesn't appear to be worth it (the current final-by-default change may not be worth it from a breaking-code point of view, but it seems to be considered the "right thing").
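The upgrade path described here, where old code keeps compiling for at least one release while pointing at its replacement, is roughly what D's `deprecated` attribute provides. A minimal sketch, assuming a renamed std.json entry point (`parseJson` is an invented "old" name for illustration):

```d
import std.json : JSONValue, parseJSON;

// Release N: the old name keeps compiling but emits a deprecation
// message naming the replacement, so the same source builds with
// both the old and the new compiler.
deprecated("use std.json.parseJSON instead")
JSONValue parseJson(string s)
{
    return parseJSON(s);
}

// Release N+1 or later: delete the deprecated shim entirely.
```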
 So, there's the solution that has been proposed before:

    !final
    !pure
    !nothrow
    etc.
I think this is OK. I'm also OK with the introduction of new keywords (preferably not something like not_nothrow). !nothrow isn't great, but oh well.
Mar 13 2014