
digitalmars.D - Vision for the D language - stabilizing complexity?

reply Andrew Godfrey <X y.com> writes:
This question is (I've just realized) the primary concern I have 
about the future of D (and hence whether it's worth depending on).

I looked at the 2015H1 vision, and don't see an answer to this 
there.

So, an example to illustrate the question: In a recent thread, I 
saw code that used an "alias parameter". I haven't seen this 
before. Or have I? I'm not really sure, because:
* "alias" is a keyword I've seen before.
* type inference (which I like in general), means that maybe this 
is the "formal" way to specify whatever it means, and people 
usually just leave it out.

Now, I'm not asking about making a breaking language change, and 
I'm not exactly complaining about new language features. I'm more 
thinking about when someone who knows all the current features, 
tries to read code: How hard is the language for that human to 
parse? The more different meanings a keyword has (consider 
"static"), and ditto for attributes, the harder it is to parse.

Sorry for the novel, but now I can ask my question: What is the D 
leadership's vision for how the language will evolve with respect 
to this metric (ease of parseability by a human already well 
versed in the latest version of the language)?

I ask because I see lots of discussions that seem to be proposing 
a change that will incrementally  increase this difficulty. Over 
time, that would significantly change the language, possibly 
approaching C++'s level of difficulty, which I'll call "many 
bridges too far". And C++ seems to have given up fighting this 
(e.g. I like the idea of the C++ "uniform initialization" and 
"initializer list" features, but the way their syntax interacts 
with old-school syntax is frustrating.)

Thanks!
Jul 07 2016
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/7/16 12:23 PM, Andrew Godfrey wrote:
 What is the D leadership's vision for how the language will evolve with
 respect to this metric (ease of parseability by a human already well
 versed in the latest version of the language)?
Alias parameters have been around for a while. Generally it's not a feasible strategy to assign (or assume as reader) a single context-independent meaning to a keyword. We are always keeping a close watch on language complexity as we improve it. One thing we're looking increasingly into is shifting to the compiler the responsibility of ascribing qualifiers and attributes, thus reducing the need for writing them by humans. Walter has a few quite interesting recent ideas in the area, which may bear fruit soon. Andrei
Jul 07 2016
prev sibling parent reply deadalnix <deadalnix gmail.com> writes:
On Thursday, 7 July 2016 at 16:23:35 UTC, Andrew Godfrey wrote:
 So, an example to illustrate the question: In a recent thread, 
 I saw code that used an "alias parameter". I haven't seen this 
 before. Or have I? I'm not really sure, because:
 * "alias" is a keyword I've seen before.
 * type inference (which I like in general), means that maybe 
 this is the "formal" way to specify whatever it means, and 
 people usually just leave it out.
While I understand the sentiment, I don't think this example is a good one, and Andrei's answer only makes it scarier. alias in an alias parameter is the exact same thing as alias in other situations: make this name refer to that other thing. And here we are touching the problem: why do you expect alias to work in one place but not expect it to work in other places? The answer is down below.
 I ask because I see lots of discussions that seem to be 
 proposing a change that will incrementally  increase this 
 difficulty. Over time, that would significantly change the 
 language, possibly approaching C++'s level of difficulty, which 
 I'll call "many bridges too far". And C++ seems to have given 
 up fighting this (e.g. I like the idea of the C++ "uniform 
 initialization" and "initializer list" features, but the way 
 their syntax interacts with old-school syntax is frustrating.)
While this is very true, it is clear that most of D's complexity doesn't come from there. D's complexity comes, for the most part, from things being completely unprincipled and from a lack of vision.

Let me get back to your alias example. alias does one thing: give a new name to some existing thing. For instance, after doing:

    alias foo = bar;

I can use foo as an identifier and it will refer to bar.

Now as to parameters. Parameters are like declarations, but the value is provided in the form of an argument. For instance:

    void foo(int a) {} // a is a parameter.
    foo(3);            // 3 is an argument.

In that specific instance, we conclude that within foo, it is as if we declared int a = 3 (in that specific case).

The concept of alias and the concept of parameter/argument are completely orthogonal, and therefore there should be no expectation that anything special is going on. So, in

    template Foo(alias A) {}
    Foo!bar;

things should be as if, for this specific instance, within Foo, we had specified

    alias A = bar;

Except that it is not the case. D fucks up orthogonality every time it has the opportunity. As a result, there is a ton of useless knowledge that has to be accumulated. For instance:

    alias A = int; // Nope
    template Foo(alias A) {}
    Foo!int; // OK !

In general, things are so unprincipled that nobody expects them to be any more. For instance, you'd expect that

    template Foo(T...) {}       // 1/

would take a variadic number of types as arguments, while

    template Foo(alias T...) {} // 2/

would take a variadic number of aliases. But no, 1/ takes a variadic number of aliases and 2/ is invalid.

So now we've created a situation where it is impossible to define variadic, alias and parameter/argument as 3 simple, separate things; instead they form one big blob of arbitrary decisions.

This turtles down to the most simple things:

    enum E { A = 1, B = 2 }
    E bazinga = A | B;
    final switch (bazinga) { case A: ... case B: ... } // Enjoy !

And so on: @safe only means safe if you do not do this and that, DIP25 is ready to add a new pack of bricks to the already tanking Jenga tower, shared doesn't work and Walter promotes the use of undefined behavior to work around it.

In fact, when it comes to new features, more than the added complexity of the feature itself, it is its interaction with the existing madness that worries me. Not only is there an explosion of special cases that is completely intractable, it also cements the existing madness.
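For completeness, here is a minimal, compilable sketch (names invented for illustration) of the alias / alias-parameter correspondence described above:

    import std.stdio;

    int twice(int x) { return 2 * x; }

    // Eponymous template with an alias parameter: fn is bound to whatever
    // symbol the instantiation names, much like writing "alias fn = twice;".
    template callIt(alias fn)
    {
        int callIt(int x) { return fn(x); }
    }

    void main()
    {
        alias f = twice;          // plain alias declaration: f now refers to twice
        writeln(f(3));            // 6
        writeln(callIt!twice(3)); // 6 -- twice bound through the alias parameter
    }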
Jul 07 2016
next sibling parent reply Andrew Godfrey <X y.com> writes:
 Generally it's not a feasible strategy to assign (or assume as 
 reader) a single context-independent meaning to a keyword.
That may be overstating it, yes. But I am looking here for a positive statement about what kind of addition is "beyond the pale".

For example, in C++, "enum class" uses two existing keywords in a new way, but it is acceptable (although ugly as heck), because the resulting thing is 'like' an enum. OTOH, C++'s use of "static" is unacceptable, because it mixes very distinct ideas (file scope, linkage, and instancing in a class), and the context you have to look at to distinguish them is sometimes far away from the line you are looking at.

I haven't really noticed this problem much with D, but I worry about the future, because I see a refusal to introduce new keywords, combined with an eagerness to introduce new language concepts. Surely a compelling enough new language concept would justify needing to provide a migration tool to help codebases migrate to the new compiler?

Another example is "return" used for monads in e.g. Haskell - even if it only has one meaning in Haskell, it is too mixed up with a different meaning in other common languages. D's "static if" - which is a killer feature if I ignore the keyword - gives me a similar feeling (though it's much less egregious than "return" in monads). "static" is a terribly non-descriptive name because there are so many senses in which a thing could be "dynamic". What we mean in this case is "compile-time". I think!
Jul 07 2016
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 07/07/2016 10:25 PM, Andrew Godfrey wrote:
 D's "static if" - which is a killer feature if I ignore the keyword -
 gives me a similar feeling (though it's much less egregious than
 "return" in monads). "static" is a terribly non-descriptive name because
 there are so many senses in which a thing could be "dynamic".
You may well be literally the only person on Earth who dislikes the use of "static" in "static if". -- Andrei
Jul 08 2016
next sibling parent reply jmh530 <john.michael.hall gmail.com> writes:
On Friday, 8 July 2016 at 18:16:03 UTC, Andrei Alexandrescu wrote:
 You may well be literally the only person on Earth who dislikes 
 the use of "static" in "static if". -- Andrei
You have to admit that static is used in a lot of different places in D. It doesn't always mean something like compile-time either. For instance, a static member function is not a compile time member function. However, I doubt something like this is going to change, so it doesn't really bother me. I liked the way that the Sparrow language (from the presentation you posted a few weeks ago) did it. Instead of static if, they use if[ct].
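To make that concrete, here is a small invented example of how differently "static" reads within a single declaration (only the static if line is a compile-time construct):

    import std.stdio;

    struct S
    {
        static int count;                 // one variable per type, not per instance (run time)
        static void bump() { ++count; }   // callable without an instance (run time)

        static if (size_t.sizeof == 8)    // evaluated at compile time
            enum arch = "64-bit";
        else
            enum arch = "32-bit";
    }

    void main()
    {
        S.bump();
        writeln(S.count, " ", S.arch);    // prints "1 64-bit" on a 64-bit build
    }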
Jul 08 2016
next sibling parent reply Stefan Koch <uplink.coder googlemail.com> writes:
On Friday, 8 July 2016 at 19:43:39 UTC, jmh530 wrote:
 On Friday, 8 July 2016 at 18:16:03 UTC, Andrei Alexandrescu 
 wrote:
 You may well be literally the only person on Earth who 
 dislikes the use of "static" in "static if". -- Andrei
You have to admit that static is used in a lot of different places in D. It doesn't always mean something like compile-time either. For instance, a static member function is not a compile time member function. However, I doubt something like this is going to change, so it doesn't really bother me. I liked the way that the Sparrow language (from the presentation you posted a few weeks ago) did it. Instead of static if, they use if[ct].
I like static if :)
Jul 08 2016
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, Jul 08, 2016 at 08:01:10PM +0000, Stefan Koch via Digitalmars-d wrote:
 On Friday, 8 July 2016 at 19:43:39 UTC, jmh530 wrote:
 On Friday, 8 July 2016 at 18:16:03 UTC, Andrei Alexandrescu wrote:
 
 You may well be literally the only person on Earth who dislikes
 the use of "static" in "static if". -- Andrei
You have to admit that static is used in a lot of different places in D. It doesn't always mean something like compile-time either. For instance, a static member function is not a compile time member function. However, I doubt something like this is going to change, so it doesn't really bother me. I liked the way that the Sparrow language (from the presentation you posted a few weeks ago) did it. Instead of static if, they use if[ct].
I like static if :)
I like static if too. I think if[ct] is more awkward to type, even though it's fewer characters. But yeah, D *has* overloaded the "static" keyword perhaps a little more than it ought to have. But at the end of the day it's just syntax... there are far more pressing issues to worry about than syntax at the moment. T -- If Java had true garbage collection, most programs would delete themselves upon execution. -- Robert Sewell
Jul 08 2016
parent Andrew Godfrey <X y.com> writes:
On Friday, 8 July 2016 at 20:11:11 UTC, H. S. Teoh wrote:

 But yeah, D *has* overloaded the "static" keyword perhaps a 
 little more than it ought to have.  But at the end of the day 
 it's just syntax... there are far more pressing issues to worry 
 about than syntax at the moment.
 T
Okay, so now you are illustrating the *exact* problem I am trying to point out with this thread: Without trying to undo the mistakes of the past, could we please have a link (in the vision doc) to a long-term language-design vision, so that potential adopters know what to expect in 5 years or 10 years? If by then D will be as unwieldy as C++ is now, then it isn't the improvement over C++ that it currently appears to be.

"More pressing issues" is what the current vision doc is about, and I'm not suggesting substantial changes to it. Except for the time it may take the leadership to write down their long-term intentions and - possibly as an outcome of that - to resolve their differences.

I also think it could increase efficiency in the forums; any language proposal which violates the long-term vision could be referred to that doc instead of clumsily exploring little bits of it.
Jul 09 2016
prev sibling parent DLearner <bmqazwsx123 gmail.com> writes:
On Friday, 8 July 2016 at 19:43:39 UTC, jmh530 wrote:
 On Friday, 8 July 2016 at 18:16:03 UTC, Andrei Alexandrescu 
 wrote:
 You may well be literally the only person on Earth who 
 dislikes the use of "static" in "static if". -- Andrei
You have to admit that static is used in a lot of different places in D. It doesn't always mean something like compile-time either. For instance, a static member function is not a compile time member function. However, I doubt something like this is going to change, so it doesn't really bother me. I liked the way that the Sparrow language (from the presentation you posted a few weeks ago) did it. Instead of static if, they use if[ct].
I think it is a serious mistake to use the same word for different concepts. In the case of 'static', the problem is that it started out meaning 'as at, or pertaining to, compile time', and then got additional meanings. Therefore, suggest we change the keyword 'static', as used for compile time, to 'ctime'.
Jul 10 2016
prev sibling parent reply Andrew Godfrey <X y.com> writes:
On Friday, 8 July 2016 at 18:16:03 UTC, Andrei Alexandrescu wrote:
 On 07/07/2016 10:25 PM, Andrew Godfrey wrote:
 D's "static if" - which is a killer feature if I ignore the 
 keyword -
 gives me a similar feeling (though it's much less egregious 
 than
 "return" in monads). "static" is a terribly non-descriptive 
 name because
 there are so many senses in which a thing could be "dynamic".
You may well be literally the only person on Earth who dislikes the use of "static" in "static if". -- Andrei
Aha! But I don't! It feels intuitive, possibly the best use of "static". But that is immaterial, what matters is the sum of all meanings of "static" in this language. The "single instance per class" meaning of "static" is just bonkers. I've had that meaning burned into my brain for a couple of decades, from C++. But I don't have to like it! I could stomach it, though, if that was the only use of the keyword. (Or if the other meanings couldn't be used in the same contexts).
Jul 08 2016
next sibling parent reply Max Samukha <maxsamukha gmail.com> writes:
On Saturday, 9 July 2016 at 04:32:25 UTC, Andrew Godfrey wrote:

 Aha! But I don't! It feels intuitive, possibly the best use of 
 "static". But that is immaterial, what matters is the sum of 
 all meanings of "static" in this language. The "single instance 
 per class" meaning of "static" is just bonkers. I've had that 
 meaning burned into my brain for a couple of decades, from C++. 
 But I don't have to like it!
 I could stomach it, though, if that was the only use of the 
 keyword. (Or if the other meanings couldn't be used in the same 
 contexts).
The name is fine. It comes from 'statically bound/dispatched', that is 'resolved at compile time'.
Jul 08 2016
parent reply Andrew Godfrey <X y.com> writes:
On Saturday, 9 July 2016 at 06:31:01 UTC, Max Samukha wrote:
 On Saturday, 9 July 2016 at 04:32:25 UTC, Andrew Godfrey wrote:

 Aha! But I don't! It feels intuitive, possibly the best use of 
 "static". But that is immaterial, what matters is the sum of 
 all meanings of "static" in this language. The "single 
 instance per class" meaning of "static" is just bonkers. I've 
 had that meaning burned into my brain for a couple of decades, 
 from C++. But I don't have to like it!
 I could stomach it, though, if that was the only use of the 
 keyword. (Or if the other meanings couldn't be used in the 
 same contexts).
The name is fine. It comes from 'statically bound/dispatched', that is 'resolved at compile time'.
This is a tangent from the subject of this thread, but: No, that just says how it is implemented, not what it means / intends. See "the 7 stages of naming", here: http://arlobelshee.com/good-naming-is-a-process-not-a-single-step/ (That resource is talking about identifier naming, not keywords. But it applies anyway.)
Jul 09 2016
parent reply Max Samukha <maxsamukha gmail.com> writes:
On Saturday, 9 July 2016 at 14:58:55 UTC, Andrew Godfrey wrote:
 On Saturday, 9 July 2016 at 06:31:01 UTC, Max Samukha wrote:
 On Saturday, 9 July 2016 at 04:32:25 UTC, Andrew Godfrey wrote:
 This is a tangent from the subject of this thread, but: No, 
 that just says how it is implemented, not what it means / 
 intends. See "the 7 stages of naming", here: 
 http://arlobelshee.com/good-naming-is-a-process-not-a-single-step/

 (That resource is talking about identifier naming, not 
 keywords. But it applies anyway.)
You have a point, but the name is still not 'just bonkers', all things considered. Metonymy is justified in many cases, and I think this is one of them. What better name would you propose?
Jul 09 2016
next sibling parent reply Seb <seb wilzba.ch> writes:
On Saturday, 9 July 2016 at 16:38:02 UTC, Max Samukha wrote:
 On Saturday, 9 July 2016 at 14:58:55 UTC, Andrew Godfrey wrote:
 On Saturday, 9 July 2016 at 06:31:01 UTC, Max Samukha wrote:
 On Saturday, 9 July 2016 at 04:32:25 UTC, Andrew Godfrey 
 wrote:
 This is a tangent from the subject of this thread, but: No, 
 that just says how it is implemented, not what it means / 
 intends. See "the 7 stages of naming", here: 
 http://arlobelshee.com/good-naming-is-a-process-not-a-single-step/

 (That resource is talking about identifier naming, not 
 keywords. But it applies anyway.)
You have a point, but the name is still not 'just bonkers', all things considered. Metonymy is justified in many cases, and I think this is one of them. What better name would you propose?
I agree that overloading keywords in different contexts is problematic. I think every newbie is surprised when they stumble across the two different usages of enum (finite, custom lists & CT evaluation), but let's focus on the future.

Something that worries me a bit is that we don't have a clear naming convention for Phobos. There is a good wiki entry that shows the problem [1]. Basically an intuitive name should follow a standard convention, s.t. you can "guess" it, and the name can also convey more information, e.g. is it a lazy operation? (aka returns a range). `split` and `splitter` are good examples (see the sketch below), but then in other modules you might find (1) adjectives: `transposed`, `indexed`, (2) prepositions: byUTF, or (3) just nouns: setUnion, cartesianProduct, permutations, recurrence.

Disclaimer: This is just a friendly reminder that names are important, and as they are very hard to change, great care should be put into choosing them in the future ;-)

[1] http://wiki.dlang.org/Naming_conventions
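A minimal sketch of the split/splitter pair mentioned above; the expected output is shown in comments:

    import std.array : split;
    import std.algorithm.iteration : splitter;
    import std.stdio : writeln;

    void main()
    {
        auto eager = "a,b,c".split(",");     // eager: allocates and returns a string[]
        auto lazyR = "a,b,c".splitter(",");  // lazy: a range, nothing allocated up front
        writeln(eager);                      // ["a", "b", "c"]
        writeln(lazyR);                      // ["a", "b", "c"]
    }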
Jul 09 2016
parent ag0aep6g <anonymous example.com> writes:
On 07/09/2016 07:09 PM, Seb wrote:
 I agree that overloading keywords in different contexts in problematic.
 I think every newbie is surprised when he stumbled across the two
 different usages of enum (finite, custom lists & CT evaluation),
`enum e = 1;` can be seen as a shorthand for `enum {e = 1}`. Makes perfect sense then. Though I wouldn't be surprised if there are actually subtle differences between the two.
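A tiny sketch of that equivalence (identifiers invented); both forms declare a manifest constant usable in compile-time contexts:

    enum answer = 42;        // shorthand form
    enum { answer2 = 42 }    // anonymous enum member form

    static assert(answer == answer2);
    static assert(is(typeof(answer) == typeof(answer2)));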
Jul 09 2016
prev sibling parent reply Andrew Godfrey <X y.com> writes:
On Saturday, 9 July 2016 at 16:38:02 UTC, Max Samukha wrote:
 On Saturday, 9 July 2016 at 14:58:55 UTC, Andrew Godfrey wrote:
 On Saturday, 9 July 2016 at 06:31:01 UTC, Max Samukha wrote:
 On Saturday, 9 July 2016 at 04:32:25 UTC, Andrew Godfrey 
 wrote:
 This is a tangent from the subject of this thread, but: No, 
 that just says how it is implemented, not what it means / 
 intends. See "the 7 stages of naming", here: 
 http://arlobelshee.com/good-naming-is-a-process-not-a-single-step/

 (That resource is talking about identifier naming, not 
 keywords. But it applies anyway.)
You have a point, but the name is still not 'just bonkers', all things considered. Metonymy is justified in many cases, and I think this is one of them. What better name would you propose?
First, I'm not proposing a change to existing keywords; I'm using existing examples to talk about future language changes. Second, I had to look up "metonymy" in Wikipedia. Using its example: Suppose "Hollywood" referred to both the LA movie industry and, say, the jewelry industry; that's roughly equivalent to the pattern I'm talking about.

Others in this thread have suggested alternatives; many of those have things to criticize, but I would prefer something cryptic over something that has multiple subtly-different meanings in the language. I'm drawn to "#if", except people might end up thinking D has a macro preprocessor. "ifct" seems fine except I'm not sure everyone would agree how to pronounce it. Compile-time context seems significant enough that maybe it warrants punctuation, like "*if" or "$if".

I especially want to establish: If we were adding a new feature as significant as "static if", and we decided a keyword was better than punctuation, could we stomach the cost of making a new keyword, or would we shoehorn it either into one of the existing keywords unused in that context, or start talking about using attributes? I have a lot of experience with backward compatibility but I still don't understand the reticence to introduce new keywords (assuming a freely available migration tool).
Jul 09 2016
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/9/16 6:58 PM, Andrew Godfrey wrote:
 On Saturday, 9 July 2016 at 16:38:02 UTC, Max Samukha wrote:
 On Saturday, 9 July 2016 at 14:58:55 UTC, Andrew Godfrey wrote:
 On Saturday, 9 July 2016 at 06:31:01 UTC, Max Samukha wrote:
 On Saturday, 9 July 2016 at 04:32:25 UTC, Andrew Godfrey wrote:
 This is a tangent from the subject of this thread, but: No, that just
 says how it is implemented, not what it means / intends. See "the 7
 stages of naming", here:
 http://arlobelshee.com/good-naming-is-a-process-not-a-single-step/

 (That resource is talking about identifier naming, not keywords. But
 it applies anyway.)
You have a point, but the name is still not 'just bonkers', all things considered. Metonymy is justified in many cases, and I think this is one of them. What better name would you propose?
First, I'm not proposing a change to existing keywords, I'm using existing examples to talk about future language changes. Second, I had to look up "metonymy" in Wikipedia. Using its example: Suppose "Hollywood" referred to both the LA movie industry and, say, the jewelry industry; that's roughly equivalent to the pattern I'm talking about.
Way ahead of ya. The average English noun has 7.8 meanings, and the average verb has 12.
 Others in this thread have suggested alternatives, many of those have
 things to criticize, but I would prefer something cryptic over something
 that has multiple subtly-different meanings in the language.
 I'm drawn to "#if", except people might end up thinking D has a macro
 preprocessor. "ifct" seems fine except I'm not sure everyone would agree
 how to pronounce it. Compile-time context seems significant enough that
 maybe it warrants punctuation, like "*if" or "$if".
No. As an aside I see your point but "static if" is the worst example to support it, by a mile.
 I especially want to establish: If we were adding a new feature as
 significant as "static if", and we decided a keyword was better than
 punctuation, could we stomach the cost of making a new keyword, or would
 we shoehorn it either into one of the existing keywords unused in that
 context, or start talking about using attributes? I have a lot of
 experience with backward-compatibility but I still don't understand the
 reticence to introduce new keywords (assuming a freely available
 migration tool).
It just depends. There is no rigid strategy here. Worrying about the hypothetical possibility seems unnecessary. Andrei
Jul 09 2016
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 07/09/2016 12:32 AM, Andrew Godfrey wrote:
 On Friday, 8 July 2016 at 18:16:03 UTC, Andrei Alexandrescu wrote:
 On 07/07/2016 10:25 PM, Andrew Godfrey wrote:
 D's "static if" - which is a killer feature if I ignore the keyword -
 gives me a similar feeling (though it's much less egregious than
 "return" in monads). "static" is a terribly non-descriptive name because
 there are so many senses in which a thing could be "dynamic".
You may well be literally the only person on Earth who dislikes the use of "static" in "static if". -- Andrei
Aha! But I don't!
Great to hear you don't dislike it! :o) -- Andrei
Jul 09 2016
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/7/2016 7:25 PM, Andrew Godfrey wrote:
 "static" is a terribly non-descriptive name
That's why it's the go-to keyword for any functionality we can't think of a good name for, or if the name would be too long such as "launch_nucular_missiles".
Jul 08 2016
parent reply Observer <here inter.net> writes:
On Friday, 8 July 2016 at 20:57:39 UTC, Walter Bright wrote:
 On 7/7/2016 7:25 PM, Andrew Godfrey wrote:
 "static" is a terribly non-descriptive name
That's why it's the go-to keyword for any functionality we can't think of a good name for, or if the name would be too long such as "launch_nucular_missiles".
For a moment I thought Walter was being comical. But then I looked at the Index in TDPL, and I see static class constructor, static class destructor, static if, static import, static this, and just plain static.

Also, Andrei, if you're listening, I've spotted another TDPL errata. On page 459, the Index entry for "static, obligatory joke about overuse of" lists page 345, but in fact the joke is in the footnote at the bottom of page 68.

As for me, the main thing I dislike about static if is that it blends in visually a bit too well with run-time code segments. C's #if structure has its own problems, but I like the distinctiveness.

An earlier comment about wanting a different name got me to thinking. For naming variables, I own two copies of a high-quality thesaurus. One copy I keep at work, one copy I keep at home. It's invaluable when you get stuck at naming things. Why not apply that same tool to naming keywords as well? So I looked. I didn't see anything precisely on target; maybe these come closest:

    constant if
    durable if
    persistent if
    adamant if
    unalterable if
    immutable if

Okay, that last one is a joke, considering that we're talking about keyword overloading. But the effort did spark some other brain cells to fire. So we could have had any of these:

    exactly if
    strictly if
    only if

I do like the creative use of an adverb instead of an adjective in these choices; the code reads like standard English instead of a clunky made-up phrase. I also especially like the briefness and precision of "only if", and that may become my favorite way to think about this in the future. (Is there some way I can "#define only static" to get this effect?)

In fact, it is presaged on page 48 of TDPL, from whence I quote: "the basic plot is simple -- static if evaluates a compile-time expression and compiles the controlled statement only if the expression is true". So you the language designers had the idea in hand, but then sadly overlooked it.
Jul 09 2016
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/9/2016 1:57 AM, Observer wrote:
 As for me, the main thing I dislike about static if is that it blends in
 visually a bit too well with run-time code segments.  C's #if structure
 has its own problems, but I like the distinctiveness.
Ironically, "static if" has entered the C++ lexicon from D.
Jul 09 2016
prev sibling next sibling parent reply burjui <bytefu gmail.com> writes:
On Saturday, 9 July 2016 at 08:57:18 UTC, Observer wrote:
     constant if
     durable if
     persistent if
     adamant if
     unalterable if
     immutable if

 Okay, that last one is a joke, considering that we're talking 
 about keyword overloading. But the effort did spark some other 
 brain cells to fire. So we could have had any of these:

     exactly if
     strictly if
     only if
I'm sorry, but these examples are horrible, except maybe "constant if", because none give a clue about compile-time and they are not even synonyms. The last three are just plain nonsense, especially "strictly if" which implies that ordinary "if" is somehow not reliable. You didn't even think about it, just picked the words from a book. "static if" is perfectly fine, if you just try to imagine what in "if" could be dynamic, because the only meaningful answer is: "The condition". If there is a context where "static" really needs to be replaced by a synonym, it's definitely not "static if".
Jul 09 2016
parent Observer <here inter.net> writes:
On Saturday, 9 July 2016 at 11:49:49 UTC, burjui wrote:
 I'm sorry, but these examples are horrible, except maybe 
 "constant if", because none give a clue about compile-time and 
 they are not even synonyms. ... You didn't even think about it, 
 just picked the words from a book.
It's a process. One comes up with an idea, mulls it over, tries it out, evaluates, revises. I'm not saying that any one of these choices is definitely better. What I'm saying is that someone claimed earlier that "static" wasn't a good choice, but gave no examples of possible alternatives, or how to find one. The point is, an attempt to follow a process that often yields good results for variable naming seems to not give great results in this case, which I suppose argues for the status quo.
Jul 09 2016
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 07/09/2016 04:57 AM, Observer wrote:
 Also, Andrei, if you're listening, I've spotted another TDPL errata.
 On page 459, the Index entry for "static, obligatory joke about overuse
 of" lists page 345, but in fact the joke is in the footnote at the bottom
 of page 68.
Added to http://erdani.com/tdpl/errata. Thanks! -- Andrei
Jul 09 2016
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08.07.2016 04:25, Andrew Godfrey wrote:
 Another example is "return" used for monads in eg Haskell - even if it
 only has one meaning in Haskell, it is too mixed up with a different
 meaning in other common languages. D's "static if" - which is a killer
 feature if I ignore the keyword - gives me a similar feeling (though
 it's much less egregious than "return" in monads).
'return' in Haskell is perfectly fine.
Jul 08 2016
parent reply Andrew Godfrey <X y.com> writes:
On Friday, 8 July 2016 at 21:23:24 UTC, Timon Gehr wrote:
 On 08.07.2016 04:25, Andrew Godfrey wrote:
 Another example is "return" used for monads in eg Haskell - 
 even if it
 only has one meaning in Haskell, it is too mixed up with a 
 different
 meaning in other common languages. D's "static if" - which is 
 a killer
 feature if I ignore the keyword - gives me a similar feeling 
 (though
 it's much less egregious than "return" in monads).
'return' in Haskell is perfectly fine.
This (long) talk does a good job of explaining the problem with using the name 'return' in monads: https://www.infoq.com/presentations/functional-pros-cons#downloadPdf

Others have said it shorter. I took this example because it crosses languages. Of course we can't avoid clashing with other languages; there are only so many keywords to use. But there's definitely a principle here worth considering, that is, if you care about D adoption. (What do you think?)
Jul 08 2016
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 09.07.2016 06:39, Andrew Godfrey wrote:
 On Friday, 8 July 2016 at 21:23:24 UTC, Timon Gehr wrote:
 On 08.07.2016 04:25, Andrew Godfrey wrote:
 Another example is "return" used for monads in eg Haskell - even if it
 only has one meaning in Haskell, it is too mixed up with a different
 meaning in other common languages. D's "static if" - which is a killer
 feature if I ignore the keyword - gives me a similar feeling (though
 it's much less egregious than "return" in monads).
'return' in Haskell is perfectly fine.
This (long) talk does a good job of explaining the problem with using the name 'return' in monads. https://www.infoq.com/presentations/functional-pros-cons#downloadPdf ...
The reason you linked to this (long) talk instead of a more digestible source is that the presenter manages to bring across his flawed argumentation in a way that is charismatic enough to fool a biased audience. It's a reasonable name. 'return' creates a computation that returns the given value. This is a different corner in language design space, why should C constrain Haskell's design in any way?
 Others have said it shorter.
Thanks for providing the links to that material.
 I took this example because it crosses
 languages. Of course we can't avoid clashing with other languages, there
 are only so many keywords to use. But there's definitely a principle
 here worth considering, that is if you care about D adoption.
 ...
I was complaining about the cheap shot at Haskell. This has become way too fashionable.

That's a better example.
Jul 09 2016
parent Andrew Godfrey <X y.com> writes:
On Saturday, 9 July 2016 at 22:20:22 UTC, Timon Gehr wrote:
 On 09.07.2016 06:39, Andrew Godfrey wrote:
 On Friday, 8 July 2016 at 21:23:24 UTC, Timon Gehr wrote:
 On 08.07.2016 04:25, Andrew Godfrey wrote:
 Another example is "return" used for monads in eg Haskell - 
 even if it
 only has one meaning in Haskell, it is too mixed up with a 
 different
 meaning in other common languages. D's "static if" - which 
 is a killer
 feature if I ignore the keyword - gives me a similar feeling 
 (though
 it's much less egregious than "return" in monads).
'return' in Haskell is perfectly fine.
This (long) talk does a good job of explaining the problem with using the name 'return' in monads. https://www.infoq.com/presentations/functional-pros-cons#downloadPdf ...
The reason you linked to this (long) talk instead of a more digestible source is that the presenter manages to bring across his flawed argumentation in a way that is charismatic enough to fool a biased audience. It's a reasonable name. 'return' creates a computation that returns the given value. This is a different corner in language design space, why should C constrain Haskell's design in any way?
 Others have said it shorter.
Thanks for providing the links to that material.
 I took this example because it crosses
 languages. Of course we can't avoid clashing with other 
 languages, there
 are only so many keywords to use. But there's definitely a 
 principle
 here worth considering, that is if you care about D adoption.
 ...
I was complaining about the cheap shot at Haskell. This has become way too fashionable.
Sorry I chose such a charged example. It's not a cheap shot in my case; I have waded through a number of "monad tutorials" and criticisms of monad tutorials, and that talk summed up my experience. But I really can't claim that monads would be easy to learn if they used better naming. Easier maybe, but they could still be difficult.

Actually maybe Haskell is more relevant as an example to talk about "overreaching features" - like Haskell's lazy evaluation. In D, GC, auto-decode, and dynamic arrays have overreached IMO. But I'm hard pressed to think of something to write in a long-term vision about that. (E.g. I like how Phobos is adopting ranges. Wouldn't want to slow that down. But maybe we're blind to the downsides!)

So... you're right, I have no useful suggestions to make after beating on Haskell. :)
Jul 10 2016
prev sibling next sibling parent reply ag0aep6g <anonymous example.com> writes:
On 07/08/2016 02:56 AM, deadalnix wrote:
 alias A = int; // Nope
 template Foo(alias A) {}
 Foo!int; // OK !
I think you've got "Nope" and "OK" mixed up there. [...]
 And so on,  safe only mean safe if you do not do this and that,
As far as I'm aware, the dictatorship agrees that the holes in @safe are bugs that need fixing.
Jul 07 2016
parent reply deadalnix <deadalnix gmail.com> writes:
On Friday, 8 July 2016 at 05:26:44 UTC, ag0aep6g wrote:
 And so on,  safe only mean safe if you do not do this and that,
As far as I'm aware, the dictatorship agrees that the holes in safe are bugs that need fixing.
That's a completely meaningless statement, plus overall the dictatorship's position is completely inconsistent.

It is meaningless because sometimes you have A and B that are both safe on their own, but doing both is unsafe. In that case A or B needs to be banned, but nothing tells you which one. This isn't a bug, this is a failure to have a principled approach to safety.

The position is inconsistent because the dictatorship refuses to compromise on mutually exclusive goals. For instance, @safe is defined as ensuring memory safety, but not against undefined behaviors (in fact Walter promotes the use of UB in various situations, for instance when it comes to shared). You CANNOT have undefined behavior that is defined as being memory safe.
Jul 08 2016
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 07/08/2016 02:42 PM, deadalnix wrote:
 It is meaningless because sometime, you have A and B that are both safe
 on their own, but doing both is unsafe. In which case A or B need to be
 banned, but nothing allows to know which one. This isn't a bug, this is
 a failure to have a principled approach to safety.
What would be a good example? Is there a bug report for it?
 The position is inconsistent because the dictatorship refuses to
 compromise on mutually exclusive goals. For instance,  safe is defined
 as ensuring memory safety. But not against undefined behaviors (in fact
 Walter promote the use of UB in various situations, for instance when it
 comes to shared). You CANNOT have undefined behavior that are defined as
 being memory safe.
I agree with that. What would be a good example? Where is the reference to Walter's promotion of UB in @safe code? Andrei
Jul 08 2016
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08.07.2016 21:26, Andrei Alexandrescu wrote:
 Where is the reference to Walter's promotion of UB in  safe code?
Only found this, but IIRC, there was another discussion: http://www.digitalmars.com/d/archives/digitalmars/D/C_compiler_vs_D_compiler_272670.html#N272689
Jul 08 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/8/2016 2:33 PM, Timon Gehr wrote:
 On 08.07.2016 21:26, Andrei Alexandrescu wrote:
 Where is the reference to Walter's promotion of UB in  safe code?
Only found this, but IIRC, there was another discussion: http://www.digitalmars.com/d/archives/digitalmars/D/C_compiler_vs_D_compiler_272670.html#N272689
I don't agree with the notion that all UBs can lead to memory corruption. deadalnix's hypothetical fails because "proving it always passes" cannot be done at the same time as "remove this code path because it is undefined". For that reason, I don't agree with the interpretation of UB in C++ that some C++ compiler authors use.
Jul 08 2016
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 09.07.2016 02:26, Walter Bright wrote:
 On 7/8/2016 2:33 PM, Timon Gehr wrote:
 On 08.07.2016 21:26, Andrei Alexandrescu wrote:
 Where is the reference to Walter's promotion of UB in  safe code?
Only found this, but IIRC, there was another discussion: http://www.digitalmars.com/d/archives/digitalmars/D/C_compiler_vs_D_compiler_272670.html#N272689
I don't agree with the notion that all UB's can lead to memory corruption. deadalix's hypothetical fails because "proving it always passes" cannot be done at the same time as "remove this code path because it is undefined". ...
It's not the same branch. The code path that is removed ensures that the other branch always passes. Anyway, deadalnix was just illustrating how a compiler might introduce memory corruption in practice. The specification should not /allow/ the compiler to do so in the first place.

@safe is checked in the front end and UB is exploited by the back end. The front end needs to be independent of the back end. Using the standard definitions of terms, any UB that makes it into the back end is allowed to introduce memory corruption -- the front end cannot know, so how can it verify that it does not happen?
 I don't agree with the interpretation of UB in C++ that some C++
 compiler authors do ...
Undefined behaviour means the language semantics don't define a successor state for a computation that has not terminated. Do you agree with that definition? If not, what /is/ UB in D, and why is it called UB?
Jul 09 2016
next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 10.07.2016 00:36, Timon Gehr wrote:
 the language semantics don't
*doesn't
Jul 09 2016
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 07/09/2016 06:36 PM, Timon Gehr wrote:
 Undefined behaviour means the language semantics don't define a
 successor state for a computation that has not terminated. Do you agree
 with that definition? If not, what /is/ UB in D, and why is it called UB?
Yah, I was joking with Walter that effectively the moment you define undefined behavior it's not undefined any longer :o). It happens to the best of us. I think we're all aligned here.

There's some interesting interaction here. Consider:

    int fun(int x)
    {
        int[10] y;
        ...
        return ++y[9 >> x];
    }

Now, under the "shift by negative numbers is undefined" rule, the compiler is free to eliminate the bounds check from the indexing because it's always within bounds for all defined programs. If it isn't, memory corruption may ensue. However, if the compiler says "shift by negative numbers is implementation-specified", then the compiler cannot portably eliminate the bounds check.

It's a nice example illustrating how things that seem to have nothing to do with memory corruption do affect it.

Andrei
Jul 09 2016
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Sat, Jul 09, 2016 at 07:17:59PM -0400, Andrei Alexandrescu via Digitalmars-d
wrote:
 On 07/09/2016 06:36 PM, Timon Gehr wrote:
 Undefined behaviour means the language semantics don't define a
 successor state for a computation that has not terminated. Do you
 agree with that definition? If not, what /is/ UB in D, and why is it
 called UB?
Yah, I was joking with Walter that effectively the moment you define undefined behavior it's not undefined any longer :o). It happens to the best of us. I think we're all aligned here. There's some interesting interaction here. Consider: int fun(int x) { int[10] y; ... return ++y[9 >> x]; } Now, under the "shift by negative numbers is undefined" rule, the compiler is free to eliminate the bounds check from the indexing because it's always within bounds for all defined programs. If it isn't, memory corruption may ensue. However, if the compiler says "shift by negative numbers is implementation-specified", the the compiler cannot portably eliminate the bounds check.
I find this rather disturbing, actually. There is a fine line between taking advantage of asserts to elide stuff that the programmer promises will not happen, and eliding something that's defined to be UB and thereby causing memory corruption.

In the above example, I'd be OK with the compiler eliding the bounds check if there were an assert(x >= 0) either in the function body or in the in-contract. Having the compiler elide the bounds check without any assert or any other indication that the programmer has made assurances that UB won't occur is very scary to me, as plain ole carelessness can easily lead to exploitable security holes. I hope D doesn't become an example of this kind of security hole. At the very least, I'd expect the compiler to warn that the function argument may cause UB, and suggest that an in-contract or assert be added.

On a more technical note, I think eliding the bounds check on the grounds that shifting by negative x is UB is based on a fallacy. Eliding a bounds check should only be done when the compiler has the assurance that the bounds check is not needed. Just because a particular construct is UB does not meet this condition, because, being UB, there is no way to tell if the bounds check is needed or not; therefore the correct behaviour IMO is to leave the bounds check in. The elision should only happen if the compiler is assured that it's actually not needed.

To elide simply because negative x is UB basically amounts to saying "the programmer ought to know better than writing UB code, so therefore let's just assume that the programmer never makes a mistake and barge ahead fearlessly FTW!". We all know where blind trust in programmer reliability leads: security holes galore, because humans make mistakes. Assuming humans don't make mistakes, which is what this kind of exploitation of UB essentially boils down to, leads to madness.
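For illustration, here is a minimal sketch of the kind of explicit promise I have in mind, adapting the fun example from above (the exact bound chosen is my assumption):

    int fun(int x)
    {
        // The promise is spelled out: with this assert in place, 9 >> x is
        // provably in 0 .. 9, so dropping the array bounds check is justified
        // by the assert rather than by a UB rule.
        assert(x >= 0 && x < 32, "shift amount out of range");
        int[10] y;
        return ++y[9 >> x];
    }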
 It's a nice example illustrating how things that seem to have nothing
 with memory corruption do effect it.
[...] T -- Stop staring at me like that! It's offens... no, you'll hurt your eyes!
Jul 09 2016
next sibling parent Observer <here inter.net> writes:
On Saturday, 9 July 2016 at 23:44:07 UTC, H. S. Teoh wrote:
 On a more technical note, I think eliding the bounds check on 
 the grounds that shifting by negative x is UB is based on a 
 fallacy. Eliding a bounds check should only be done when the 
 compiler has the assurance that the bounds check is not needed. 
 Just because a particular construct is UB does not meet this 
 condition, because, being UB, there is no way to tell if the 
 bounds check is needed or not, therefore the correct behaviour 
 IMO is to leave the bounds check in. The elision should only 
 happen if the compiler is assured that it's actually not needed.

 To elide simply because negative x is UB basically amounts to 
 saying "the programmer ought to know better than writing UB 
 code, so therefore let's just assume that the programmer never 
 makes a mistake and barge ahead fearlessly FTW!". We all know 
 where blind trust in programmer reliability leads: security 
 holes galore because humans make mistakes. Assuming humans 
 don't make mistakes, which is what this kind of exploitation of 
 UB essentially boils down to, leads to madness.
There is also a huge practical benefit in leaving such checks in the code. I've worked a lot in Perl over the last decade, and one soon finds that it has great error-checking sprinkled throughout the implementation. Based on that experience, I can tell you it's tremendously helpful for development efforts if unexpected problems are detected immediately when they occur, as opposed to forcing the programmer to debug based on the wild particles left over after an atom-smashing experiment.
Jul 09 2016
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/9/16 7:44 PM, H. S. Teoh via Digitalmars-d wrote:
 On Sat, Jul 09, 2016 at 07:17:59PM -0400, Andrei Alexandrescu via
Digitalmars-d wrote:
 On 07/09/2016 06:36 PM, Timon Gehr wrote:
 Undefined behaviour means the language semantics don't define a
 successor state for a computation that has not terminated. Do you
 agree with that definition? If not, what /is/ UB in D, and why is it
 called UB?
Yah, I was joking with Walter that effectively the moment you define undefined behavior it's not undefined any longer :o). It happens to the best of us. I think we're all aligned here. There's some interesting interaction here. Consider: int fun(int x) { int[10] y; ... return ++y[9 >> x]; } Now, under the "shift by negative numbers is undefined" rule, the compiler is free to eliminate the bounds check from the indexing because it's always within bounds for all defined programs. If it isn't, memory corruption may ensue. However, if the compiler says "shift by negative numbers is implementation-specified", the the compiler cannot portably eliminate the bounds check.
I find this rather disturbing, actually. There is a fine line between taking advantage of assert's to elide stuff that the programmer promises will not happen, and eliding something that's defined to be UB and thereby resulting in memory corruption.
Nah, this is cut and dried. You should just continue being nicely turbed. "Shifting by a negative integer has undefined behavior" is what it is. Now I'm not saying it's good to define it that way, just that if it's defined that way then these are the consequences.
 In the above example, I'd be OK with the compiler eliding the bounds
 check if there an assert(x >= 0) either in the function body or in the
 in-contract.  Having the compiler elide the bounds check without any
 assert or any other indication that the programmer has made assurances
 that UB won't occur is very scary to me, as plain ole carelessness can
 easily lead to exploitable security holes.  I hope D doesn't become an
 example of this kind of security hole.
Yeah, we'd ideally like very little UB and no UB in @safe code. I think we should define shift with out-of-bounds values as "implementation specified".
 At the very least, I'd expect the compiler to warn that the function
 argument may cause UB, and suggest that an in-contract or assert be
 added.
You should expect the compiler to do what the language definition prescribes.
 On a more technical note, I think eliding the bounds check on the
 grounds that shifting by negative x is UB is based on a fallacy.
No.
 Eliding
 a bounds check should only be done when the compiler has the assurance
 that the bounds check is not needed. Just because a particular construct
 is UB does not meet this condition, because, being UB, there is no way
 to tell if the bounds check is needed or not, therefore the correct
 behaviour IMO is to leave the bounds check in. The elision should only
 happen if the compiler is assured that it's actually not needed.

 To elide simply because negative x is UB basically amounts to saying
 "the programmer ought to know better than writing UB code, so therefore
 let's just assume that the programmer never makes a mistake and barge
 ahead fearlessly FTW!". We all know where blind trust in programmer
 reliability leads: security holes galore because humans make mistakes.
 Assuming humans don't make mistakes, which is what this kind of
 exploitation of UB essentially boils down to, leads to madness.
You're overthinking this. Undefined is undefined. We're done here. Andrei
Jul 09 2016
next sibling parent reply Observer <here inter.net> writes:
On Sunday, 10 July 2016 at 02:29:15 UTC, Andrei Alexandrescu 
wrote:
 You're overthinking this. Undefined is undefined. We're done 
 here.
Andrei, you're underthinking this. You're treating it like an elegant academic exercise in an ivory tower, without consideration for the practical realities of using the language productively (i.e., getting direct feedback when the programmer makes mistakes, which we all do, so s/he doesn't need to spend hours in a debugger).
Jul 09 2016
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/9/16 11:52 PM, Observer wrote:
 On Sunday, 10 July 2016 at 02:29:15 UTC, Andrei Alexandrescu wrote:
 You're overthinking this. Undefined is undefined. We're done here.
Andrei, you're underthinking this. You're treating it like an elegant academic exercise in an ivory tower, without consideration for the practical realities of using the language productively (i.e., getting direct feedback when the programmer makes mistakes, which we all do, so s/he doesn't need to spend hours in a debugger).
Oh, I'm all for defining formerly undefined behavior. But don't call it undefined. I'm just fact checking over here. -- Andrei
Jul 09 2016
prev sibling parent reply ketmar <ketmar ketmar.no-ip.org> writes:
On Sunday, 10 July 2016 at 02:29:15 UTC, Andrei Alexandrescu 
wrote:
 Yeah, we'd ideally like very little UB and no UB in safe code.
no at all. define it, and for other cases raise an error. for the example given, the code should not compile. at all. cast the thing to ubyte or face the error. and even then, compiler should insert runtime check for not shifting more than 31/63 bits, just like it does for bounds. at least it should do that in @safe code.
Jul 09 2016
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/10/16 2:29 AM, ketmar wrote:
 On Sunday, 10 July 2016 at 02:29:15 UTC, Andrei Alexandrescu wrote:
 Yeah, we'd ideally like very little UB and no UB in safe code.
no at all. define it, and for other cases raise an error.
How do you define use of a pointer after deallocation? -- Andrei
Jul 10 2016
parent reply ketmar <ketmar ketmar.no-ip.org> writes:
On Sunday, 10 July 2016 at 11:49:31 UTC, Andrei Alexandrescu 
wrote:
 On 7/10/16 2:29 AM, ketmar wrote:
 On Sunday, 10 July 2016 at 02:29:15 UTC, Andrei Alexandrescu 
 wrote:
 Yeah, we'd ideally like very little UB and no UB in safe code.
no at all. define it, and for other cases raise an error.
How do you define use of a pointer after deallocation? -- Andrei
bug. error. big boom. note that compiler is allowed to not check that, though, and only *then* it is UB -- only if compiler is not implementing that part of the specs.
Jul 10 2016
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 07/10/2016 08:25 AM, ketmar wrote:
 On Sunday, 10 July 2016 at 11:49:31 UTC, Andrei Alexandrescu wrote:
 On 7/10/16 2:29 AM, ketmar wrote:
 On Sunday, 10 July 2016 at 02:29:15 UTC, Andrei Alexandrescu wrote:
 Yeah, we'd ideally like very little UB and no UB in safe code.
no at all. define it, and for other cases raise an error.
How do you define use of a pointer after deallocation? -- Andrei
bug. error. big boom.
Great spec :o) -- Andrei
Jul 10 2016
prev sibling next sibling parent reply deadalnix <deadalnix gmail.com> writes:
On Saturday, 9 July 2016 at 23:44:07 UTC, H. S. Teoh wrote:
 I find this rather disturbing, actually.  There is a fine line 
 between taking advantage of assert's to elide stuff that the 
 programmer promises will not happen, and eliding something 
 that's defined to be UB and thereby resulting in memory 
 corruption.

 [...]


 T
While I understand how frustrating it looks, there is simply no way around it in practice. For instance, the shift operation on x86 is essentially:

    x >> (y & (typeof(x).sizeof * 8 - 1))

but it differs on other platforms. This means that in practice, the compiler would have to add bounds checks on every shift. The performance impact would be through the roof, plus you'd have to specify what to do in case of an out-of-range shift.

Contrary to popular belief, the compiler does not try to screw you with UB. There is no code of the form "if this is UB, then do this insanely stupid shit". But what happens is that algorithm A does not explore the UB case - because it is UB - and just does nothing with it, and algorithm B, on its side, does not care about UB but will reuse results from A and do something unexpected.

In Andrei's example, the compiler won't say "fuck this guy, he wrote UB". What will happen is that the range-checking code will conclude that 9 >> something must be smaller than 10. Then the control-flow simplification code will use that range to conclude that the bounds check must always be true and replace it with an unconditional branch.

As you can see, the behavior of each component here is fairly reasonable. However, the end result may not be.
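To make the cost concrete, this is roughly the check a compiler would have to emit around every shift if the operation were fully defined (a hand-written sketch, not real compiler output; checkedShr is an invented name):

    int checkedShr(int x, int y)
    {
        enum int bits = int.sizeof * 8;   // 32 for int
        assert(y >= 0 && y < bits, "out-of-range shift amount");
        return x >> y;
        // or, to bake in x86's wrap-around behavior instead of trapping:
        // return x >> (y & (bits - 1));
    }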
Jul 11 2016
parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/11/2016 11:47 AM, deadalnix wrote:
 As you can see the behavior of each component here is fairly reasonable.
 However, the end result may not be.
As was mentioned elsewhere, integers getting indeterminate values only results in memory corruption if the language has an unsafe memory model. The solution is something like this: https://github.com/dlang/dlang.org/pull/1420
Jul 11 2016
prev sibling parent Shachar Shemesh <shachar weka.io> writes:
On 10/07/16 02:44, H. S. Teoh via Digitalmars-d wrote:
 I find this rather disturbing, actually.  There is a fine line between
 taking advantage of assert's to elide stuff that the programmer promises
 will not happen, and eliding something that's defined to be UB and
 thereby resulting in memory corruption.
I like clang's resolution to this problem.

On the one hand, leaving things undefined allows the compiler to optimize away cases that would otherwise be horrible for performance. On the other hand, these optimizations sometimes turn code that was meant to be okay into really not okay.

LLVM, at least for C and C++, has an undefined behavior sanitizer. You can turn it on, and any case where a check that a superficial reading of the code suggests takes place, but that was optimized away due to undefined behavior, turns into a warning. This allows you to write code in a sane way while not putting in a ton (metric or otherwise, as I won't fight over a 10% difference) of security holes.

Shachar
Jul 11 2016
prev sibling next sibling parent reply deadalnix <deadalnix gmail.com> writes:
On Friday, 8 July 2016 at 19:26:59 UTC, Andrei Alexandrescu wrote:
 On 07/08/2016 02:42 PM, deadalnix wrote:
 It is meaningless because sometime, you have A and B that are 
 both safe
 on their own, but doing both is unsafe. In which case A or B 
 need to be
 banned, but nothing allows to know which one. This isn't a 
 bug, this is
 a failure to have a principled approach to safety.
What would be a good example? Is there a bug report for it?
For instance:

@safe int foo(int *iPtr) { return *iPtr; }
@safe int bar(int[] iSlice) { return foo(iSlice.ptr); }

foo assumes that creating an invalid pointer is not @safe, while bar assumes that .ptr is @safe as it doesn't access memory. If the slice's size is 0, that is not safe.

This is one such case where each of these operations is safe granted some preconditions, but they violate each other's preconditions, so using both is unsafe.
 The position is inconsistent because the dictatorship refuses 
 to
 compromise on mutually exclusive goals. For instance,  safe is 
 defined
 as ensuring memory safety. But not against undefined behaviors 
 (in fact
 Walter promote the use of UB in various situations, for 
 instance when it
 comes to shared). You CANNOT have undefined behavior that are 
 defined as
 being memory safe.
I agree with that. What would be a good example? Where is the reference to Walter's promotion of UB in safe code?
I don't have a specific reference to point to right now. However, there have been several events of "@safe guarantees memory safety, it just doesn't protect against X" where X is undefined behavior most of the time.
Jul 11 2016
next sibling parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 7/11/16 1:50 PM, deadalnix wrote:
 On Friday, 8 July 2016 at 19:26:59 UTC, Andrei Alexandrescu wrote:
 On 07/08/2016 02:42 PM, deadalnix wrote:
 It is meaningless because sometime, you have A and B that are both safe
 on their own, but doing both is unsafe. In which case A or B need to be
 banned, but nothing allows to know which one. This isn't a bug, this is
 a failure to have a principled approach to safety.
What would be a good example? Is there a bug report for it?
For instance:

@safe int foo(int *iPtr) { return *iPtr; }
@safe int bar(int[] iSlice) { return foo(iSlice.ptr); }

foo assumes that creating an invalid pointer is not @safe, while bar assumes that .ptr is @safe as it doesn't access memory. If the slice's size is 0, that is not safe.
That was reported and being worked on: https://github.com/dlang/dmd/pull/5860 -Steve
Jul 11 2016
parent reply deadalnix <deadalnix gmail.com> writes:
On Monday, 11 July 2016 at 18:00:20 UTC, Steven Schveighoffer 
wrote:
 On 7/11/16 1:50 PM, deadalnix wrote:
 On Friday, 8 July 2016 at 19:26:59 UTC, Andrei Alexandrescu 
 wrote:
 On 07/08/2016 02:42 PM, deadalnix wrote:
 It is meaningless because sometime, you have A and B that 
 are both safe
 on their own, but doing both is unsafe. In which case A or B 
 need to be
 banned, but nothing allows to know which one. This isn't a 
 bug, this is
 a failure to have a principled approach to safety.
What would be a good example? Is there a bug report for it?
For instance:

@safe int foo(int *iPtr) { return *iPtr; }
@safe int bar(int[] iSlice) { return foo(iSlice.ptr); }

foo assumes that creating an invalid pointer is not @safe, while bar assumes that .ptr is @safe as it doesn't access memory. If the slice's size is 0, that is not safe.
That was reported and being worked on: https://github.com/dlang/dmd/pull/5860 -Steve
Alright, but keep in mind that is an example, not the actual problem I'm talking about. There are many reasonable ways to make the example above safe: disallow dereferencing pointers from unknown sources, do a bounds check on .ptr, disallow .ptr altogether, and much more.

The root problem is that "@safe guarantees memory safety, and if it doesn't it is a bug" provides no information as to what the bug is here and no actionable items as to how to fix it, or even as to what needs fixing.
Jul 11 2016
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/11/2016 11:57 AM, deadalnix wrote:
 Alright, but keep in mind that is an example, not the actual problem I'm
 talking about. There are many reasonable way to make the example above
 safe: disallow dereferencing pointers from unknown source,
Once you're in @safe code, the assumption is that pointers are valid. Unknown sources are marked @trusted, where the programmer takes responsibility to ensure they are valid.
 do a bound check on .ptr, disallow .ptr altogether and much more.
The PR disallows .ptr in @safe code. The safe alternative is &a[0], which implies a bounds check.
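A rough sketch of the difference (the function name is mine, not from the PR):

    @safe int* firstElementPtr(int[] a)
    {
        // return a.ptr;  // disallowed under the PR: for an empty slice this
        //                // yields a pointer that must not be dereferenced
        return &a[0];     // bounds-checked: fails with RangeError if a is empty
    }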
 The root problem is that " safe guarantee memory safety and if it
 doesn't it is a bug" provides no information as to what is the bug here
 and no actionable items as to how to fix it, or even as to what needs
 fixing.
It's kind of a meaningless criticism. Any piece of code has a bug if it doesn't meet the specification, and there's no way to verify it meets the specification short of proofs, and if anyone wants to work on proofs I'm all for it. In the meantime, please post all holes found to bugzilla and tag them with the 'safe' keyword.
Jul 11 2016
parent reply deadalnix <deadalnix gmail.com> writes:
On Monday, 11 July 2016 at 21:52:36 UTC, Walter Bright wrote:
 The root problem is that " safe guarantee memory safety and if 
 it
 doesn't it is a bug" provides no information as to what is the 
 bug here
 and no actionable items as to how to fix it, or even as to 
 what needs
 fixing.
It's kind of a meaningless criticism. Any piece of code has a bug if it doesn't meet the specification, and there's no way to verify it meets the specification short of proofs, and if anyone wants to work on proofs I'm all for it. In the meantime, please post all holes found to bugzilla and tag them with the 'safe' keyword.
You know, there is a saying: "When the wise point at the moon, the idiot look at the finger". I can't force you to look at the moon, I can only point at it.
Jul 11 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/11/2016 5:15 PM, deadalnix wrote:
 On Monday, 11 July 2016 at 21:52:36 UTC, Walter Bright wrote:
 The root problem is that " safe guarantee memory safety and if it
 doesn't it is a bug" provides no information as to what is the bug here
 and no actionable items as to how to fix it, or even as to what needs
 fixing.
It's kind of a meaningless criticism. Any piece of code has a bug if it doesn't meet the specification, and there's no way to verify it meets the specification short of proofs, and if anyone wants to work on proofs I'm all for it. In the meantime, please post all holes found to bugzilla and tag them with the 'safe' keyword.
You know, there is a saying: "When the wise point at the moon, the idiot look at the finger". I can't force you to look at the moon, I can only point at it.
I don't see anything actionable in your comment.
Jul 11 2016
parent reply deadalnix <deadalnix gmail.com> writes:
On Tuesday, 12 July 2016 at 01:28:31 UTC, Walter Bright wrote:
 I don't see anything actionable in your comment.
Defining in what way @safe actually ensures safety would be a good start. I'm sorry for the frustration, but the "mention a problem, get asked for an example, provide example, example is debated to death while problem is ignored" cycle has become the typical interaction pattern around here, and that is VERY frustrating.
Jul 11 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/11/2016 7:23 PM, deadalnix wrote:
 On Tuesday, 12 July 2016 at 01:28:31 UTC, Walter Bright wrote:
 I don't see anything actionable in your comment.
Defining in what way @safe actually ensures safety would be a good start. I'm sorry for the frustration, but the "mention a problem, get asked for an example, provide example, example is debated to death while problem is ignored" cycle has become the typical interaction pattern around here, and that is VERY frustrating.
The example you gave of .ptr resulting in unsafe code has been in bugzilla since 2013, and has an open PR on it to fix it:

https://issues.dlang.org/show_bug.cgi?id=11176

You didn't submit it to bugzilla - if you don't post problems to bugzilla, most likely they will get overlooked, and you will get frustrated. @safe issues are tagged with the 'safe' keyword in bugzilla. If you know of other bugs with @safe, and they aren't in the list, please add them. Saying generically that @safe has holes in it is useless information, since it is not actionable, and nobody keeps track of bugs posted on the n.g., nor are they even findable if you suspect they're there.

----

If I may rant a bit, lots of posters here posit that with "more process", everything will go better. Meanwhile, we DO have process for bug reports. They go to bugzilla. Posting bugs to the n.g. does not work. More process doesn't work if people are unwilling to adhere to it.
Jul 11 2016
next sibling parent Jack Stouffer <jack jackstouffer.com> writes:
On Tuesday, 12 July 2016 at 04:37:06 UTC, Walter Bright wrote:
 If I may rant a bit, lots of posters here posit that with "more 
 process", everything will go better.
Gah, I hate this idea. It's pervasive in every office in the country. "Oh, if we just had better tools we could manage our projects better." Meanwhile, the manager on the project hasn't checked in with the engineers in weeks and probably has no idea what they're working on. It's a people problem 99% of the time.
Jul 11 2016
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2016-07-12 06:37, Walter Bright wrote:

 The example you gave of .ptr resulting in unsafe code has been in
 bugzilla since 2013, and has an open PR on it to fix it.

   https://issues.dlang.org/show_bug.cgi?id=11176

 You didn't submit it to bugzilla - if you don't post problems to
 bugzilla, most likely they will get overlooked, and you will get
 frustrated.  safe issues are tagged with the 'safe' keyword in bugzilla.
 If you know of other bugs with  safe, and they aren't in the list,
 please add them. Saying generically that  safe has holes in it is
 useless information since it is not actionable and nobody keeps track of
 bugs posted on the n.g. nor are they even findable if you suspect
 they're there.
Not sure if this is what deadalnix thinks about, but @safe should be a whitelist of features, not a blacklist [1]. But you already closed that bug report as invalid.

[1] https://issues.dlang.org/show_bug.cgi?id=12941

-- 
/Jacob Carlborg
Jul 11 2016
parent deadalnix <deadalnix gmail.com> writes:
On Tuesday, 12 July 2016 at 06:36:18 UTC, Jacob Carlborg wrote:
 Not sure if this is what deadalnix thinks about but  safe 
 should be a whitelist of features, not a blacklist [1]. But you 
 already closed that bug report as invalid.

 [1] https://issues.dlang.org/show_bug.cgi?id=12941
I think that we should have a set of rules that we can look at and that provides assurance that things are memory safe. Whitelist vs blacklist is an implementation detail (though surely a whitelist seems like the better approach).
Jul 12 2016
prev sibling parent reply deadalnix <deadalnix gmail.com> writes:
On Tuesday, 12 July 2016 at 04:37:06 UTC, Walter Bright wrote:
 If I may rant a bit, lots of posters here posit that with "more 
 process", everything will go better. Meanwhile, we DO have 
 process for bug reports. They go to bugzilla. Posting bugs to 
 the n.g. does not work. More process doesn't work if people are 
 unwilling to adhere to it.
If you think I'm advocating for more process, you've been misled. More process doesn't work, in general, even if people are willing to adhere to it. If you think the issue I have is with one specific bug, same thing.
Jul 12 2016
parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/12/2016 12:20 AM, deadalnix wrote:
 If you think the issue I have is with one specific bug, same thing.
There aren't any open issues with 'safe' that have been reported by you. Of the open issues, I don't think any of them show anything fundamentally broken about safe. If you've got other specific issues in mind, please file them.
Jul 12 2016
prev sibling parent Kagamin <spam here.lot> writes:
On Monday, 11 July 2016 at 18:57:51 UTC, deadalnix wrote:
 Alright, but keep in mind that is an example, not the actual 
 problem I'm talking about. There are many reasonable way to 
 make the example above safe: disallow dereferencing pointers 
 from unknown source, do a bound check on .ptr, disallow .ptr 
 altogether and much more.

 The root problem is that " safe guarantee memory safety and if 
 it doesn't it is a bug" provides no information as to what is 
 the bug here and no actionable items as to how to fix it, or 
 even as to what needs fixing.
Saw it on reddit: how rust manages safety bugs: https://www.reddit.com/r/programming/comments/4vto4r/inside_the_fastest_font_renderer_in_the_world/d61ltp8
Aug 05 2016
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/11/2016 10:50 AM, deadalnix wrote:
 foo assume that creating an invalid pointer is not safe, while bar
 assume that .ptr is safe as it doesn't access memory. If the slice's
 size is 0, that is not safe.
There's a PR to fix this: https://github.com/dlang/dmd/pull/5860
Jul 11 2016
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 07/11/2016 01:50 PM, deadalnix wrote:
 On Friday, 8 July 2016 at 19:26:59 UTC, Andrei Alexandrescu wrote:
 On 07/08/2016 02:42 PM, deadalnix wrote:
 It is meaningless because sometime, you have A and B that are both safe
 on their own, but doing both is unsafe. In which case A or B need to be
 banned, but nothing allows to know which one. This isn't a bug, this is
 a failure to have a principled approach to safety.
What would be a good example? Is there a bug report for it?
For instance:

@safe int foo(int *iPtr) { return *iPtr; }
@safe int bar(int[] iSlice) { return foo(iSlice.ptr); }
Here bar should not pass the @safe test because it may produce a non-dereferenceable pointer. Consider:

@safe int baz(int[] a) { return bar(a[$ .. $]); }

It is legal (and safe) to take an empty slice at the end of an array. Following the call, bar serves foo an invalid pointer that shan't be dereferenced. I added https://issues.dlang.org/show_bug.cgi?id=16266. It looks to me like a corner case rather than an illustration of a systemic issue.
 foo assume that creating an invalid pointer is not safe, while bar
 assume that .ptr is safe as it doesn't access memory. If the slice's
 size is 0, that is not safe.

 This is one such case where each of this operation is safe granted some
 preconditions, but violate each other's preconditions so using both is
 unsafe.

 The position is inconsistent because the dictatorship refuses to
 compromise on mutually exclusive goals. For instance,  safe is defined
 as ensuring memory safety. But not against undefined behaviors (in fact
 Walter promote the use of UB in various situations, for instance when it
 comes to shared). You CANNOT have undefined behavior that are defined as
 being memory safe.
I agree with that. What would be a good example? Where is the reference to Walter's promotion of UB in safe code?
I don't have a specific reference to point to right now.
"Don't do the crime if you can't do the time." Andrei
Jul 11 2016
prev sibling parent reply Shachar Shemesh <shachar weka.io> writes:
On 08/07/16 22:26, Andrei Alexandrescu wrote:

 I agree with that. What would be a good example? Where is the reference
 to Walter's promotion of UB in  safe code?


 Andrei
I don't have an example by Walter, but I can give you an example by Andrei. In D-Conf. On stage. During the keynote. Immediately after knocking down C++ for doing the precise same thing, but in a way that is both defined and less likely to produce errors.

The topic was reference counting's interaction with immutable (see deadalnix's comment, with which I completely agree, about inter-feature interactions). When asked (by me) how you intend to actually solve this, you said that since you know where the memory comes from, you will cast away the immutability.

Casting away immutability is UB in D.

Not long before that, you laughed at C++ for its "mutable" keyword, which allows doing this very thing in a way that is:
A. Fully defined (if you know what you're doing)
and
B. Not requiring a cast

C++ fully defines when it is okay to cast away constness, gives you aids so that you know that that's what you are doing and nothing else, and gives you a method by which you can do it without a cast if the circumstances support it. D says any such cast is UB.

Shachar
Jul 11 2016
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 07/12/2016 01:15 AM, Shachar Shemesh wrote:
 On 08/07/16 22:26, Andrei Alexandrescu wrote:

 I agree with that. What would be a good example? Where is the reference
 to Walter's promotion of UB in  safe code?


 Andrei
I don't have an example by Walter, but I can give you an example by Andrei. In D-Conf. On stage. During the keynote. Immediately after knocking down C++ for doing the precise same thing, but in a way that is both defined and less likely to produce errors.
Love the drama. I was quite excited to see what follows :o).
 The topic was reference counting's interaction with immutable (see
 deadalnix's comment, to which I completely agree, about inter-features
 interactions).
Amaury failed to produce an example to support his point, aside from a rehash of a bug report from 2013 that is virtually fixed. Do you have any?
 When asked (by me) how you intend to actually solve this,
 you said that since you know where the memory comes from, you will cast
 away the immutability.

 Casting away immutability is UB in D.
I understand. There is an essential detail that sadly puts an anticlimactic end to the telenovela.

The unsafe cast happens at allocator level. Inside any memory allocator, there is a point at which behavior outside the type system happens: memory that is untyped becomes typed, and vice versa (during deallocation). As long as you ultimately use system primitives for getting untyped bytes, at some point you'll operate outside the type system. It stands to reason, then, that at allocator level, information and manipulations outside the type system's capabilities are possible and legal so long as such manipulations are part of the standard library and offer defined behavior. This is par for the course in C++ and any systems language.

The solution (very ingenious, due to dicebot) in fact does not quite cast immutability away. Starting from a possibly immutable pointer, it subtracts an offset from it. At that point the memory is not tracked by the type system, but it is known to the allocator to contain metadata associated with the pointer that had been allocated with it. After the subtraction, the cast exposes data that is mutable, without violating the immutability of the object proper. As I said, it's quite an ingenious solution.
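To illustrate, here is a minimal sketch (the header layout and names are hypothetical, not dicebot's actual code) of the idea:

    // The allocator places a mutable header immediately before the payload
    // it hands out; the reference count lives there.
    struct Header { size_t refCount; }

    // Starting from a possibly immutable pointer to the payload, step back
    // by the header size. That header memory was never typed as immutable,
    // so mutating the count does not touch the immutable object itself.
    size_t* refCountOf(immutable(int)* payload) @system
    {
        auto raw = (cast(ubyte*) payload) - Header.sizeof;
        return &(cast(Header*) raw).refCount;
    }

Bumping or dropping the count through the returned pointer mutates only the allocator's header; the immutable payload itself is never written.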
 Not long before that, you laughed at C++ for it's "mutable" keyword,
 which allows doing this very thing in a way that is:
 A. Fully defined (if you know what you're doing)
 and
 B. Not requiring a cast
I think we're in good shape with what we have; mutable has too much freedom and it's good to get away without it. Andrei
Jul 11 2016
next sibling parent reply deadalnix <deadalnix gmail.com> writes:
On Tuesday, 12 July 2016 at 05:33:00 UTC, Andrei Alexandrescu 
wrote:
 Amaury failed to produce an example to support his point, aside 
 from a rehash of a bug report from 2013 that is virtually 
 fixed. Do you have any?
Finger, moon. I presented maybe 5 examples of what I'm talking about already and it is still not enough. You guys keep wanting to discuss every single example to death to avoid doing the hard thinking.
 I think we're in good shape with what we have; mutable has too 
 much freedom and it's good to get away without it.
True but once again, finger, moon, etc...
Jul 12 2016
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/12/2016 12:23 AM, deadalnix wrote:
 I presented maybe 5 exemple of what I'm talking about already
Links, please. There are perhaps 300,000 messages in this n.g.
Jul 12 2016
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 07/12/2016 03:23 AM, deadalnix wrote:
 On Tuesday, 12 July 2016 at 05:33:00 UTC, Andrei Alexandrescu wrote:
 Amaury failed to produce an example to support his point, aside from a
 rehash of a bug report from 2013 that is virtually fixed. Do you have
 any?
Finger moon. I presented maybe 5 exemple of what I'm talking about already and it is still not enough. You guys keep wanting to discuss every single example to death to avoid doing the hard thinking.
 I think we're in good shape with what we have; mutable has too much
 freedom and it's good to get away without it.
True but once again, finger, moon, etc...
Indeed I'm not the sharpest tool in the shed, and since it's already been established I'm the idiot and you're the wise man (congratulations - surely enough the great work to substantiate that is very soon to follow) in the proverb, I hope you'll allow me one more pedestrian question.

So I've been looking through this thread for the five examples of what you're talking about (which to my mind is "@safe is just a convention") and the closest I could find is your post on http://forum.dlang.org/post/iysrtqzytdnrxsqtfwvk@forum.dlang.org.

So there you discuss the inconsistency of "alias", which as far as I understand has nothing to do with safety. Then we have:

enum E { A = 1, B = 2 }
E bazinga = A | B;
final switch (bazinga) { case A: ... case B: ... } // Enjoy!

which I pasted with minor changes here: https://dpaste.dzfl.pl/b4f84374c3ae. I'm unclear how that interacts with @safe. It could, if the language allowed executing unsafe code after the switch. But it doesn't. Could you please clarify? And could you please point to the other examples?

Thanks,

Andrei
Jul 12 2016
parent reply deadalnix <deadalnix gmail.com> writes:
On Tuesday, 12 July 2016 at 14:17:30 UTC, Andrei Alexandrescu 
wrote:
 Indeed I'm not the sharpest tool in the shed, and since it's 
 already been established I'm the idiot and you're the wise man 
 (congratulations - surely enough the great work to substantiate 
 that is very soon to follow) in the proverb, I hope you'll 
 allow me one more pedestrian question.
Proverb are not meant to be interpreted literally. If I'd think you are an actual idiot, I wouldn't waste my time arguing with you.
 So I've been looking through this thread for the five examples 
 of what you're talking about (which to my mind is " safe is 
 just a convention") and the closest I could find is your post 
 on 
 http://forum.dlang.org/post/iysrtqzytdnrxsqtfwvk forum.dlang.org.

 So there you discuss the inconsistency of "alias" which as far 
 as I understand has nothing to do with safety. Then we have:

 enum E { A = 1, B = 2 }
 E bazinga = A | B;
 final switch (bazinga) { case A: ... case B: ... } // Enjoy !

 which I pasted with minor changes here: 
 https://dpaste.dzfl.pl/b4f84374c3ae. I'm unclear how that 
 interacts with  safe. It could, if the language would allow 
 executing unsafe code after the switch. But it doesn't. Could 
 you please clarify? And could you please point to the other 
 examples?


 Thanks,

 Andrei
My point has nothing to do with safety, and this is why the various examples have nothing to do with safety. Safety was an example. The enum/final switch thing was another. The alias thing again one more.

These are issues where each individual decision, in isolation, is actually very reasonable, but they simply fail as a whole because these individual decisions are either mutually exclusive (worst case scenario) or simply introduce needless complexity (best case scenario), both of which are undesirable.

The thread was about complexity in the language. My point is that the current way things are done introduces a lot of accidental complexity, which is overall undesirable. This negatively impacts various aspects of the language, including, but not limited to, @safe.

The problem I'm pointing at is that problems are considered in isolation, with disregard for the big picture. Ironically, this is exactly what is happening here, by debating every example to death rather than the point.

Maybe I'm expressing myself badly, but I have discussed this with many other D community members and I seem to be able to reach them, so it must not be THAT bad.
Jul 12 2016
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 07/12/2016 02:37 PM, deadalnix wrote:
 On Tuesday, 12 July 2016 at 14:17:30 UTC, Andrei Alexandrescu wrote:
 Indeed I'm not the sharpest tool in the shed, and since it's already
 been established I'm the idiot and you're the wise man
 (congratulations - surely enough the great work to substantiate that
 is very soon to follow) in the proverb, I hope you'll allow me one
 more pedestrian question.
Proverb are not meant to be interpreted literally.
Also proverbs are not licenses for people to be jerks.
 So I've been looking through this thread for the five examples of what
 you're talking about (which to my mind is " safe is just a
 convention") and the closest I could find is your post on
 http://forum.dlang.org/post/iysrtqzytdnrxsqtfwvk forum.dlang.org.

 So there you discuss the inconsistency of "alias" which as far as I
 understand has nothing to do with safety. Then we have:

 enum E { A = 1, B = 2 }
 E bazinga = A | B;
 final switch (bazinga) { case A: ... case B: ... } // Enjoy !

 which I pasted with minor changes here:
 https://dpaste.dzfl.pl/b4f84374c3ae. I'm unclear how that interacts
 with  safe. It could, if the language would allow executing unsafe
 code after the switch. But it doesn't. Could you please clarify? And
 could you please point to the other examples?


 Thanks,

 Andrei
My point has nothing to do with safety, and this is why the various examples have nothing to do with safety. Safety was an example. The enum/final switch thing was another. The alias thing again one more.
Where are the others? Andrei
Jul 12 2016
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/12/2016 11:37 AM, deadalnix wrote:
 My point has nothing to do with safety, and this is why various example
 have nothing to do with safety.
So the only known issues with safe are already in bugzilla. This is good information.
Jul 12 2016
prev sibling parent reply Kagamin <spam here.lot> writes:
On Tuesday, 12 July 2016 at 18:37:20 UTC, deadalnix wrote:
 The thread was about complexity in the language. My point is 
 that the current way things are done introduce a lot of 
 accidental complexity, which is overall undesirable. This 
 impact negatively various aspects of the languages, including, 
 but not limited to,  safe .

 The problem I'm pointing at is that problems are considered in 
 isolation, with disregard to the big picture. Ironically, this 
 is exactly what is happening here, by debating every example to 
 death rather than on the point.
Software design is an iterative process because one can't sort everything at once. Recommended read: https://www.amazon.com/Code-Complete-Practical-Handbook-Construction/dp/0735619670/
Jul 13 2016
next sibling parent reply Chris <wendlec tcd.ie> writes:
On Wednesday, 13 July 2016 at 17:30:53 UTC, Kagamin wrote:
 On Tuesday, 12 July 2016 at 18:37:20 UTC, deadalnix wrote:

 Software design is an iterative process because one can't sort 
 everything at once. Recommended read: 
 https://www.amazon.com/Code-Complete-Practical-Handbook-Construction/dp/0735619670/
If you look at the reviews that have less than four stars, the book doesn't seem to be very useful for people who are not beginners.
Jul 14 2016
parent Kagamin <spam here.lot> writes:
On Thursday, 14 July 2016 at 09:44:15 UTC, Chris wrote:
 If you look at the reviews that have less than four stars, the 
 book doesn't seem to be very useful for people who are not 
 beginners.
I meant high-level knowledge about software design in chapter 5.
Jul 14 2016
prev sibling parent reply Chris <wendlec tcd.ie> writes:
On Wednesday, 13 July 2016 at 17:30:53 UTC, Kagamin wrote:

 Software design is an iterative process because one can't sort 
 everything at once.
Not true. Ola can. :) (I just couldn't resist ...)
Jul 14 2016
parent reply Ola Fosheim Grøstad writes:
On Thursday, 14 July 2016 at 11:38:59 UTC, Chris wrote:
 On Wednesday, 13 July 2016 at 17:30:53 UTC, Kagamin wrote:

 Software design is an iterative process because one can't sort 
 everything at once.
Not true. Ola can. :) (I just couldn't resist ...)
I don't have time for a long rant on this... But if you are designing a truly new language (and D isn't), then you create prototypes, then you build a framework that is suitable for evolutionary design, then you spec it, then you try to prove it sound, then you implement it, then you trash it, and redesign it and write a new spec. Once you have a foundation where most things can be expressed in libraries, you have a good base for iterating and handing it to the world.

Of course, the first thing you ought to do is to look at existing knowhow related to language design. That's a no-brainer.

The alternative, to just iterate, is what gives you languages like Perl and PHP.
Jul 14 2016
next sibling parent reply Kagamin <spam here.lot> writes:
On Thursday, 14 July 2016 at 12:12:34 UTC, Ola Fosheim Grøstad 
wrote:
 I don't have time for a long rant on this... But if you are 
 designing a truly new langauge  (and D isn't), then you create 
 prototypes, then you build a framework that is suitable for 
 evolutionary design, then you spec it, then you try to prove it 
 sound, then you implement it then you trash it, and redesign it 
 and write a new spec.
What's the reason to implement what can't work even for you alone?
Jul 14 2016
parent reply Ola Fosheim Grøstad writes:
On Thursday, 14 July 2016 at 12:38:44 UTC, Kagamin wrote:
 What's the reason to implement what can't work even for you 
 alone?
Huh? I need to explain the purpose of building prototypes?
Jul 14 2016
parent reply Kagamin <spam here.lot> writes:
On Thursday, 14 July 2016 at 12:57:06 UTC, Ola Fosheim Grøstad 
wrote:
 Huh?  I need to explain the purpose of building prototypes?
You mean your process describes building prototypes only?
Jul 14 2016
parent reply Ola Fosheim Grøstad writes:
On Thursday, 14 July 2016 at 13:11:36 UTC, Kagamin wrote:
 On Thursday, 14 July 2016 at 12:57:06 UTC, Ola Fosheim Grøstad 
 wrote:
 Huh?  I need to explain the purpose of building prototypes?
You mean your process describes building prototypes only?
Yes? You cannot easily iterate over the design of the core language without creating a mess. You can iterate the design of libraries and to some extent syntactical sugar.
Jul 14 2016
parent Kagamin <spam here.lot> writes:
On Thursday, 14 July 2016 at 13:24:51 UTC, Ola Fosheim Grøstad 
wrote:
 You mean your process describes building prototypes only?
Yes? You cannot easily iterate over the design of the core language without creating a mess. You can iterate the design of libraries and to some extent syntactical sugar.
You create something when you ship a product. If you build only prototypes, you don't achieve the goal; you described a process of not creating anything, so expectedly it bore no fruit.
Jul 15 2016
prev sibling parent reply Chris <wendlec tcd.ie> writes:
On Thursday, 14 July 2016 at 12:12:34 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 14 July 2016 at 11:38:59 UTC, Chris wrote:
 On Wednesday, 13 July 2016 at 17:30:53 UTC, Kagamin wrote:

 Software design is an iterative process because one can't 
 sort everything at once.
Not true. Ola can. :) (I just couldn't resist ...)
I don't have time for a long rant on this...
Now, now. Where's your sense of humor?
 But if you are designing a truly new langauge  (and D isn't), 
 then you create prototypes, then you build a framework that is 
 suitable for evolutionary design, then you spec it, then you 
 try to prove it sound, then you implement it then you trash it, 
 and redesign it and write a new spec. Once you have a 
 foundation where most things can be expressed in libraries you 
 have a good base for iterating and handing it to the world.
Such a language will never see the light of day. Never. And given the constant changes in the IT business, you'll have to constantly trash and re-implement things. Nobody will be able to use the language in the real world, and it's using a language in the real world that shows you where a language's strengths and weaknesses are. I fear that some of the younger languages are taking that path. They will be ready for use by the time we have quark-based processors or have switched to telepathy altogether :-) What makes a language attractive is that you can actually use it - here and now.
 Of course, the first thing you ought to do is to look at 
 existing knowhow related to language design.
Which is what D did.
 That's a no-brainer.

 The alternative, to just iterate, is what gives you languages 
 like Perl and Php.
... which, in fairness, were never meant to be carefully designed languages. Just convenient hacks for everyday tasks.
Jul 14 2016
next sibling parent reply Ola Fosheim Grøstad writes:
On Thursday, 14 July 2016 at 13:26:06 UTC, Chris wrote:
 Such a language will never see the light of day.
Many such languages exist.
 What makes a language attractive is that you can actually use 
 it - here and now.
What makes a language attractive is that it has system support and provides solutions that save time. That's what languages like I follow several languages that are very attractive, but that I cannot use because they don't have system support. I am also using languages that are less attractive than the alternatives for the same reasons.
 Of course, the first thing you ought to do is to look at 
 existing knowhow related to language design.
Which is what D did.
No, it did not build on existing knowhow in language design theory, it was a fair reinterpretation of the C++ programming model with a tiny bit of Pythonesque extensions.
 ... which, in fairness, where never meant to be carefully 
 designed languages. Just convenient hacks for everyday tasks.
Perl and PHP started as small and convenient scripting languages, but they were predominantly evolved in an iterative fashion for decades after that, and accumulated a lot of issues related precisely to that iterative evolution. Both C++ and D show clear signs of their abstraction mechanisms not fitting well with the core language. Too many mechanisms, not generic enough. And that happened because significant changes came late in the process, after deployment. You can say the same thing about Go and error handling; it sticks out like a sore thumb.
Jul 14 2016
parent reply Chris <wendlec tcd.ie> writes:
On Thursday, 14 July 2016 at 13:39:48 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 14 July 2016 at 13:26:06 UTC, Chris wrote:
 Such a language will never see the light of day.
Many such languages exist.
Like? I mean languages that can be used in the real world. Certainly not Nim. It's not usable yet, it may change drastically any time.
 What makes a language attractive is that you can actually use 
 it - here and now.
What makes a language attractive is that it has system support and provides solutions that save time. That's what languages
... and they're all usable as in I can write software with them right now.
 I follow several languages that are very attractive, but that I 
 cannot use because they don't have system support. I am also 
 using languages that are less attractive than the alternatives 
 for the same reasons.

 Of course, the first thing you ought to do is to look at 
 existing knowhow related to language design.
Which is what D did.
No, it did not build on existing knowhow in language design theory, it was a fair reinterpretation of the C++ programming model with a tiny bit of Pythonesque extensions.
Examples?
 ... which, in fairness, where never meant to be carefully 
 designed languages. Just convenient hacks for everyday tasks.
Perl and Php started as small and convenient scripting languages, but they were predominantly evolved in an iterative fashion for decades after that, and aggregated a lot of issues related to exactly iterative evolution.
Yes, exactly, they were never meant to be big languages, just scripting tools. Same happened to Python in a way. It should never have left the lab and infected millions of people.
 Both C++ and D shows clear signs of their abstraction 
 mechanisms not fitting well with the core language. Too many 
 mechanisms, not generic enough. And that happened because 
 significant changes came late in the process, after deployment. 
 You can say the same thing about Go and error-handling, it 
 sticks out like a sore thumb.
There's never _the_ perfect time to deploy a language, just like there's never _the_ perfect time to buy a computer: the moment you leave the shop it's out of date. You're dreaming of a language that only exists in cloud cuckoo land, and it will get you nowhere. But of course, it's much easier to criticize the players on the pitch from your comfy armchair than to actually go onto the pitch and play yourself. No language ever gets it 100% right, because there are conflicting demands, and it's trivial to point out flaws that are bound to arise out of conflicting demands.
Jul 14 2016
parent reply Ola Fosheim Grøstad writes:
On Thursday, 14 July 2016 at 14:11:25 UTC, Chris wrote:
 On Thursday, 14 July 2016 at 13:39:48 UTC, Ola Fosheim Grøstad 
 wrote:
 On Thursday, 14 July 2016 at 13:26:06 UTC, Chris wrote:
 Such a language will never see the light of day.
Many such languages exist.
Didn't I say that I don't have time for a long rant? :-) Simula is a pretty good example. In its first incarnation it was a simulation language; then it was reformulated as a general-purpose language. Dart is a pretty good example. The extensions have been primarily syntactical AFAIK. TypeScript is a pretty good example of a language that is both stable and wildly expansive, as the core language is JavaScript and TypeScript is a layer above the core language. It is getting pretty good actually.
 ... and they're all usable as in I can write software with them 
 right now.
Plenty of languages are usable, both Pony and Pure are usable, but they are not widely used. So it currently does not make much sense for me to use them as they most likely would cause more trouble than they would save me.
 Examples?
Slices etc.
 Yes, exactly, they were never meant to be big languages, just 
 scripting tools. Same happened to Python in a way. It should 
 never have left the lab and infected millions of people.
Python was informed by an educational language, but was designed to appeal to professionals, so I don't think it applies to Python as much.
 There's never _the_ perfect time to deploy a language, just 
 like there's never _the_ perfect time to buy a computer, the 
 moment you leave the shop it's out of date.
Huh, I don't understand the comparison? Anyhow, I don't upgrade until I hit some unacceptable performance issues. I have a 4 year old computer and have no performance issues with it yet. :-P There is very little advantage for me to have a faster computer than where the software is deployed.
 You're dreaming of a language that only exists in cloud cuckoo 
 land and it will get you nowhere.
Nope. Such languages exist, but they are not _system level_ programming languages.
 But of course, it's much easier to criticize the players on the 
 pitch from your comfy armchair than to actually go onto the 
 pitch and play yourself.
What makes you think that I am not playing? What you actually are saying is that one should not make judgments about programming languages or try to influence their direction.
 No language ever gets it 100% right, because there are 
 conflicting demands, and it's trivial to point out flaws that 
 are bound to arise out of conflicting demands.
What conflicting demands do you suggest apply to D? I don't see them, so I hope you can inform me.
Jul 14 2016
next sibling parent reply Chris <wendlec tcd.ie> writes:
On Thursday, 14 July 2016 at 14:46:50 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 14 July 2016 at 14:11:25 UTC, Chris wrote:
 On Thursday, 14 July 2016 at 13:39:48 UTC, Ola Fosheim Grøstad 
 wrote:
 On Thursday, 14 July 2016 at 13:26:06 UTC, Chris wrote:
 Such a language will never see the light of day.
Many such languages exist.
Didn't I say that I don't have time for a long rant? :-) Simula is a pretty good example. In it's first incarnation it was a simulation language, then it was reformulated as a general purpose language. Dart is a pretty good example. The extensions have been primarily syntactical AFAIK. TypeScript is a pretty good example of a language that is both stable and wildly expansive, as the core language is JavaScript and TypeScript is a layer above the core language. It is getting pretty good actually.
 Go is a reasonable example (despite the error handling 
 blunder). It has not changed much in terms of the core language 
 and IIRC it is based on earlier languages by the same authors.

 The JVM is also a decent example of a core language that is 
 fairly stable. It as based on the StrongTalk VM.

 I could go on for a while.
I'm not convinced. Dart & TypeScript are scripting languages for the Internet and cannot be compared to D and C++. Go is an Internet/server language, very bare bones and designed for Google's code mines. All three languages above were designed by big companies with certain commercial goals in mind. I don't know much about Simula (your patriotic choice :), but it's pure OOP and as such cannot be compared to D either (which is multi-paradigm).
 ... and they're all usable as in I can write software with 
 them right now.
Plenty of languages are usable, both Pony and Pure are usable, but they are not widely used. So it currently does not make much sense for me to use them as they most likely would cause more trouble than they would save me.
But you don't use them for a reason.
 Examples?
Slices etc.
 Yes, exactly, they were never meant to be big languages, just 
 scripting tools. Same happened to Python in a way. It should 
 never have left the lab and infected millions of people.
Python was informed by an educational language, but was designed to appeal to professionals, so I don't think it applies to Python as much.
But only in a "lab environment".
 There's never _the_ perfect time to deploy a language, just 
 like there's never _the_ perfect time to buy a computer, the 
 moment you leave the shop it's out of date.
Huh, I don't understand the comparison?
I.e., you have to deploy it at some point; it will never be perfect before you deploy it - just as you have to buy a computer at some point. If you keep waiting for the next generation, you'll never buy a computer (it has happened!).
 Anyhow, I don't upgrade until I hit some unacceptable 
 performance issues. I have a 4 year old computer and have no 
 performance issues with it yet. :-P

 There is very little advantage for me to have a faster computer 
 than where the software is deployed.


 You're dreaming of a language that only exists in cloud cuckoo 
 land and it will get you nowhere.
Nope. Such languages exits, but they are not _system level_ programming languages.
So they don't exist, because the perfect language is also a system level language.
 But of course, it's much easier to criticize the players on 
 the pitch from your comfy armchair than to actually go onto 
 the pitch and play yourself.
What makes you think that I am not playing? What you actually are saying is that one should not make judgments about programming languages or try to influence their direction.
I don't know whether I should be sad or angry to see someone with your knowledge and experience wasting his time endlessly ranting about D (while praising every other language under the sun). You could make valuable hands-on contributions to D, but choose to be the Statler & Waldorf of the community - only without the fun factor. It's tiresome and doesn't get us anywhere.
 No language ever gets it 100% right, because there are 
 conflicting demands, and it's trivial to point out flaws that 
 are bound to arise out of conflicting demands.
What conflicting demands do you suggest applies to D? I don't see them, so I hope you can inform me.
E.g. low-level control vs. safety (cf. the discussion about casting away immutable)
Jul 14 2016
parent reply Ola Fosheim Grøstad writes:
On Thursday, 14 July 2016 at 15:28:45 UTC, Chris wrote:
 I don't know much about Simula (your patriotic choice :), but 
 it's pure OOP and as such cannot be compared to D either (which 
 is multi-paradigm).
It wasn't pure OOP, not sure what you mean by that either. Not sure what you mean by calling D multi-paradigm.
 I.e you have to deploy it at some point, it will never be 
 perfect before you deploy it - just as you have to buy a 
 computer at some point. If you keep waiting for the next 
 generation, you'll never buy a computer (has happened!).
I still don't get the comparison. I don't buy a new computer until I am running out of RAM. Speed is no longer a big issue for me, not even with C++ compilation speed.
 So they don't exist, because the perfect language is also a 
 system level language.
Who has been talking about perfect? Geez, system programming languages are lightyears away from perfect. And they are way way behind high level ones.
 It's tiresome and doesn't get us anywhere.
Then don't argue the point without having a real argument against it. If your motivation is entirely defensive then you don't really achieve anything. If your motivation is informational, then it can achieve something. E.g. you could enlighten me. I don't agree with you that knowledge doesn't get people anywhere. I think it does, it just takes a lot of time, depending on where they come from. I don't know much about Andrei, but Walter does move over time.
 E.g. low-level control vs. safety (cf. the discussion about 
 casting away immutable)
I don't think that is a very good argument. All it tells me is that D's approach to safety isn't working and that you need to do this by static analysis over a much simpler core language.
Jul 14 2016
parent reply Chris <wendlec tcd.ie> writes:
On Thursday, 14 July 2016 at 15:59:30 UTC, Ola Fosheim Grøstad 
wrote:

 It wasn't pure OOP, not sure what you mean by that either.

 Not sure what you mean by calling D multi-paradigm.
As opposed to Java that is 100% OOP (well 99%).
 I still don't get the comparison. I don't buy a new computer 
 until I am running out of RAM. Speed is no longer a big issue 
 for me, not even with C++ compilation speed.
Ok, this is called a metaphor, a figure of speech. I'll translate it for you: to wait until a language is perfect before you deploy it is like constantly waiting for the next generation of computers to come out before you buy one. I.e. it will never happen, because there will always be a next generation that is even better. But, uh, you do get it, don't you?
 So they don't exist, because the perfect language is also a 
 system level language.
Who has been talking about perfect? Geez, system programming languages are lightyears away from perfect. And they are way way behind high level ones.
And why is that so? Is it because of inherent difficulties in marrying low-level functionality with high-level concepts? No, it's because language designers are stooooopid [<= irony]
 It's tiresome and doesn't get us anywhere.
Then don't argue the point without having a real argument against it. If your motivation is entirely defensive then you don't really achieve anything. If your motivation is informational, then it can achieve something. E.g. you could enlighten me. I don't agree with you that knowledge doesn't get people anywhere. I think it does, it just takes a lot of time, depending on where they come from. I don't know much about Andrei, but Walter does move over time.
Except your knowledge is not focused and thus doesn't help anyone: random rants on whatever topic comes up, instead of a focused plan of action with proofs of concept and possibly contributions to the core language. How could anyone keep track of, not to mention act on, criticism that is scattered all over threads?
 E.g. low-level control vs. safety (cf. the discussion about 
 casting away immutable)
I don't think that is a very good argument. All it tells me is that D's approach to safety isn't working and that you need to do this by static analysis over a much simpler core language.
Or is it an intricate problem that's not trivial to solve? But this is going nowhere ...
Jul 14 2016
parent reply Ola Fosheim Grøstad writes:
On Thursday, 14 July 2016 at 17:36:59 UTC, Chris wrote:
 On Thursday, 14 July 2016 at 15:59:30 UTC, Ola Fosheim Grøstad 
 wrote:
 Not sure what you mean by calling D multi-paradigm.
As opposed to Java that is 100% OOP (well 99%).
Which programming model is it that D supports that Java doesn't? Functional? Logic? ...?
 Ok, this is called a metaphor, a figure of speech.
Poor metaphor. :)
But, uh, you do get it, don't you?
That's right, I don't get it, and it isn't true. Walter's vision obviously changed with D2; it was a shift from the original core vision, which focused on creating a significantly simpler language than C++. That's perfectly ok, a change in personal interests towards a more ambitious vision is perfectly ok. But it has an impact on the outcome, obviously.
 And why is that so? Is it because of inherent difficulties to 
 marry low-level functionality with high-level concepts? No, 
 it's because language designers are stooooopid [<= irony]
Poor irony too... It is so because:

1. system level programming language design has very little academic value
2. it is very difficult to unseat C/C++, which is doing a fair job of it
3. portability is very, very important and difficult
4. high level languages often try to provide solutions to specific areas
 contributions to the core language. How could anyone keep track 
 of not to mention act on criticism that is scattered out all 
 over threads.
Oh, you don't have to. I am backing those that are arguing for reasonable positions and will do so for as long as I think that will move the project to a more interesting position. Please don't try to make yourself look like a martyr.
 Or is it an intricate problem that's not trivial to solve?
I very seldom run into memory-related issues unless I do pointer arithmetic, which @safe does not help with. If it is hard to solve, the solution is easy: postpone it until you have something on paper that can work...
Jul 14 2016
parent reply Chris <wendlec tcd.ie> writes:
On Thursday, 14 July 2016 at 18:00:36 UTC, Ola Fosheim Grøstad 
wrote:

 Please don't try to make yourself look like a martyr.
Huh? Where is that coming from all of a sudden? Sorry, I don't see the point of this comment. A martyr for what? Martyrs are stupid people; they should have stayed at home enjoying a nice fresh beer instead of trying to change the course of the world single-handedly. Maybe you should have one too, a beer that is, and think about becoming a contributor to D.
Jul 14 2016
parent reply Ola Fosheim Grøstad writes:
On Thursday, 14 July 2016 at 18:23:54 UTC, Chris wrote:
 On Thursday, 14 July 2016 at 18:00:36 UTC, Ola Fosheim Grøstad 
 wrote:

 Please don't try to make yourself look like a martyr.
Huh? Where is that coming from all of a sudden? Sorry, I don't see the point of this comment.
You were going ad hominem for no good reason. Here is a pretty good rule: if you don't think you will get something out of a discussion, don't engage in it. I personally find that I learn a lot from discussions on language design, even when other people are completely wrong. You have your own view of what is needed, I have a completely different view. You cannot impose your view of what is needed on me; it won't work without a good argument to back it up.

My view is that the position that some are arguing holds: the core language has to be stripped of special casing in order to make major progress. Aka: one step back, two steps forward.

If it makes you happy: I am from time to time looking at various ways to modify floating point behaviour, but it won't really matter until complexity is cut back, because it could easily become another complexity layer on top of what is already there. The best way to improve D is not to add more complexity, but to cut back to a cleaner core language.

I think you are taking a way too convenient position, somehow pretending that there are no major hurdles to overcome in terms of mindshare. My view is that mindshare is the most dominating problem, e.g. changing viewpoints through arguments is really the only option at the moment. What other options are there?
Jul 14 2016
parent reply Chris <wendlec tcd.ie> writes:
On Thursday, 14 July 2016 at 18:36:26 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 14 July 2016 at 18:23:54 UTC, Chris wrote:
 On Thursday, 14 July 2016 at 18:00:36 UTC, Ola Fosheim Grøstad 
 wrote:
You were going ad hominem for no good reason. Here is a pretty good rule: if you don't think you will get something out of a discussion, don't engage in it. I personally find that I learn a lot from discussions on language design, even when other people are completely wrong. You have your own view of what is needed, I have a completely different view. You cannot impose your view of what is needed on me; it won't work without a good argument to back it up.
I certainly don't impose my view on others. The only reason I was going ad hominem was to get you on board in a more substantial manner than engaging in random discussions on the forum.
 My view is that the position that some are arguing holds: the 
 core language has to be stripped down of special casing in 
 order to make major progress.

 Aka: one step back, two steps forwards.
D is open source. Would it be possible to provide a stripped down version that satisfies you as a proof of concept? The problem is that abstract reasoning doesn't convince in IT. If you provide something concrete people can work with, then they might pick up on it.
 If it makes you happy: I am from time to time looking at 
 various ways to modify floating point behaviour, but it won't 
 really matter until complexity is cut back. Because it could 
 easily become another complexity layer on top of what is 
 already there. The best way to improve on D is not to add more 
 complexity, but to cut back to a cleaner core language.
That's good to hear. Maybe you should go ahead anyway and see if and how it could be integrated. Maybe it won't add another layer of complexity. Unless you share it, nobody can chip in their 2 cents which might lead to a good solution.
 I think you are taking a way too convenient position, somehow 
 pretending that there are no major hurdles to overcome in terms 
 of mindshare. My view is that mindshare is the most dominating 
 problem, e.g. changing viewpoints through arguments is really 
 the only option at the moment.
You mean you won't give up until everybody has the same opinion as you :) Well, that's not how things work. Maybe a more diplomatic approach would be better.
 What other options are there?
Create facts. Provide a stripped down version of D and show that it's better. You don't need to do it all by yourself. Ask like minded people to help you. I'd be interested in the result. You've praised stripped down D so much that I'm curious. I'm not ideological about things.
Jul 14 2016
parent reply Ola Fosheim Grøstad writes:
On Thursday, 14 July 2016 at 19:17:06 UTC, Chris wrote:
 I certainly don't impose my view on others. The only reason I 
 was going ad hominem was to get you on board in a more 
 substantial manner than engaging in random discussions on the 
 forum.
That won't change anything. It's not a man-hour problem.
 D is open source. Would it be possible to provide a stripped 
 down version that satisfies you as a proof of concept?
I have no idea. I'm only familiar with the C++ code of DMD, and it was somewhat convoluted. When DMD transitioned to D it was stated that the codebase would be refactored heavily. It would make no sense to do anything until those who are intimate with the codebase are done with refactoring. Yet, it probably would not change anything, would it? Why would anyone start on something like that without official backing?
 That's good to hear. Maybe you should go ahead anyway and see 
 if and how it could be integrated. Maybe it won't add another 
 layer of complexity.
It would; you would most likely need to add sub-typing constraints.
 You mean you won't give up until everybody has the same opinion 
 as you
That's not what I meant, and not what I said. I am looking for arguments, not opinions. If you have a good argument, good, then I learn something. If not, maybe you learn something, if you are willing. That simple.
 Maybe a more diplomatic approach would be better.
That's just words, I'm sorry, but when a position is taken that is not sustainable, it doesn't really improve anything to say «oh, you are a little bit right, except maybe not». The point is, if people are reasonably intelligent, they do pick up the argument even if they don't admit to it in the heat of the moment. So it is better to try to present a clean position. Muddying the water by pretending that people hold a reasonable position doesn't really move anything; it just confirms the position they are holding.

The point is not to win, or to be right, but to bring proper arguments forth; only when people do so will there be some hope of gaining insights. Excuses such as «system programming is complex, therefore D must be this complex» are not a position that should be accepted. You have to get rid of this position if you want to get anywhere.
 Create facts. Provide a stripped down version of D and show 
 that it's better.
Completely unreasonable position. That would be more work than writing my own compiler, since I don't have an intimate understanding of the current DMD codebase. If I had time to implement my own D compiler then I would just design my own language too... What you are expecting is out of proportion.

There is also no point in turning a sketch into a DIP that I know will be shot down because it will require sub-typing. With the current situation it would be quite reasonable to shoot it down for adding complexity. And I am also not willing to propose something that won't give the language a competitive edge... because that won't be a real improvement to the status quo.

There are basically two options:

1. The people who said they were welcoming breaking changes need to push for a semantic cleanup of the core language so that D can continue to evolve.

2. Someone like Timon, who appears to be trying to create a more formal model for D (if I haven't got it completely wrong), will have to find a clean core language underneath the current semantics that can express the current semantics (or most of it) and also be extended in desirable directions.

The only thing I can do is support (1).
Jul 14 2016
parent reply Ola Fosheim Grøstad writes:
On Thursday, 14 July 2016 at 20:09:26 UTC, Ola Fosheim Grøstad 
wrote:
 Excuses such as «system programming is complex therefore D must 
 be this complex» is not a position that should be accepted.
And please note that this horrible excuse is propagated in the C++ community too. Time and time again people claim that C++ is complex, but that it has to be like that in order to provide the features it provides.

Not true for C++.

Not true for D.
Jul 14 2016
parent reply Kagamin <spam here.lot> writes:
On Thursday, 14 July 2016 at 20:12:13 UTC, Ola Fosheim Grøstad 
wrote:
 And please note that this horrible excuse is propagate in the 
 C++ community too. Time and time again people claim that C++ is 
 complex, but it has to be like that in order to provide the 
 features it provides.

 Not true for C++.

 Not true for D.
Your suggestion for static analysis goes the same way: static analysis is way more complex than D currently is, but you suggest it must be this complex?
Jul 15 2016
parent reply Ola Fosheim Grøstad writes:
On Friday, 15 July 2016 at 09:29:27 UTC, Kagamin wrote:
 On Thursday, 14 July 2016 at 20:12:13 UTC, Ola Fosheim Grøstad 
 wrote:
 And please note that this horrible excuse is propagate in the 
 C++ community too. Time and time again people claim that C++ 
 is complex, but it has to be like that in order to provide the 
 features it provides.

 Not true for C++.

 Not true for D.
Your suggestion for static analysis goes the same way: static analysis is way more complex than D currently is, but you suggest it must be this complex?
Not sure what you mean.

1. It is more time-consuming to write an analysis engine that can cover convoluted machinery than simple machinery.

2. It is more difficult to extend complex machinery than simple machinery.

3. It is more work to figure out adequate simple machinery to describe complex machinery than to just keep things simple from the start.

Not very surprising that experienced language designers try to keep the core language as simple as possible?
Jul 15 2016
parent reply Kagamin <spam here.lot> writes:
On Saturday, 16 July 2016 at 06:36:33 UTC, Ola Fosheim Grøstad 
wrote:
 Not sure what you mean.

 1. It is more time consuming to write an analysis engine that 
 can cover convoluted machinery than simple machinery.

 2. It it more difficult to extend complex machinery than simple 
 machinery.

 3. It is more work to figure out adequate simple machinery to 
 describe complex machinery, than just keeping things simple 
 from the start.

 Not very surprising that experienced language designers try to 
 keep the core language as simple as possible?
You can lower D to Assembler and analyze that. Assembler is simple, isn't it?
Jul 21 2016
next sibling parent reply Andrew Godfrey <X y.com> writes:
On Thursday, 21 July 2016 at 09:35:55 UTC, Kagamin wrote:
 On Saturday, 16 July 2016 at 06:36:33 UTC, Ola Fosheim Grøstad 
 wrote:
 Not sure what you mean.

 1. It is more time consuming to write an analysis engine that 
 can cover convoluted machinery than simple machinery.

 2. It it more difficult to extend complex machinery than 
 simple machinery.

 3. It is more work to figure out adequate simple machinery to 
 describe complex machinery, than just keeping things simple 
 from the start.

 Not very surprising that experienced language designers try to 
 keep the core language as simple as possible?
You can lower D to Assembler and analyze that. Assembler is simple, isn't it?
Are you trolling? Lowering discards information.
Jul 21 2016
parent Kagamin <spam here.lot> writes:
On Thursday, 21 July 2016 at 16:21:17 UTC, Andrew Godfrey wrote:
 You can lower D to Assembler and analyze that. Assembler is 
 simple, isn't it?
Are you trolling? Lowering discards information.
AFAIK, that's what static analysis is built for: to infer high-level properties of the code that are not expressed in it.
Jul 21 2016
prev sibling parent Ola Fosheim Grøstad writes:
On Thursday, 21 July 2016 at 09:35:55 UTC, Kagamin wrote:
 You can lower D to Assembler and analyze that. Assembler is 
 simple, isn't it?
You can, and that is what C++ is mostly limited to, but then you most likely get false positives and cannot use the analysis as part of the type system. If you are going to use e.g. pointer analysis as part of the type system, then it has to happen at a higher level.
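To make the point concrete, here is a toy example of my own (not from the thread): the pure and immutable guarantees below are visible to an analysis that runs on D source, but once the code is lowered to assembler they are just ordinary loads, stores and calls, and the analysis has to rediscover them or give up.

// Toy illustration: information a D-level analysis can rely on.
pure int twice(int x) { return 2 * x; }

void main()
{
    immutable int a = 21;

    // A D-level analysis may conclude b == 42, because `a` can never change
    // and `twice` has no side effects. An assembler-level analysis sees
    // neither guarantee.
    int b = twice(a);
    assert(b == 42);
}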
Jul 31 2016
prev sibling parent Kagamin <spam here.lot> writes:
On Thursday, 14 July 2016 at 14:46:50 UTC, Ola Fosheim Grøstad 
wrote:
 The JVM is also a decent example of a core language that is 
 fairly stable. It was based on the StrongTalk VM.
AFAIK JVM has a design bug: can't reliably differentiate between methods to invoke the right one.
Jul 15 2016
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/14/2016 6:26 AM, Chris wrote:
 Now, now. Where's your sense of humor?
The thing is, he's just here to troll us. His posts all follow the same pattern of relentlessly finding nothing good whatsoever in D, and we're all idiots. There's no evidence he's ever written a line of D, his examples are pulled out of context from other posts. I don't believe he's ever read the D spec. Whenever he loses a point, he reframes and tries to change the context. He's never contributed a single line of code, nor submitted a single bug report, nor any proposals. His criticisms are always non-specific and completely unactionable. Trying to engage him in a productive discussion inevitably turns into an endless, fruitless, utterly meaningless thread, likely because he doesn't actually know anything about D beyond bits and pieces gleaned from other ng postings. It's sad, considering that he's obviously knowledgeable about the academic end of CS and could be a valuable contributor. But he chooses this instead. As for me, I've decided to put him in my killfile. He's the only one in 15 years to earn that honor.
Jul 14 2016
parent reply Ola Fosheim Grøstad writes:
On Thursday, 14 July 2016 at 23:38:17 UTC, Walter Bright wrote:
 On 7/14/2016 6:26 AM, Chris wrote:
 Now, now. Where's your sense of humor?
The thing is, he's just here to troll us. His posts all follow the same pattern of relentlessly finding nothing good whatsoever in D, and we're all idiots.
Whoah, that's sensitive. I never called anyone an idiot, but D zealots seem to have a very low threshold for calling everyone else with a little bit of experience idiots if they see room for change in the language.

The excesses of broken argumentation in this newsgroup are keeping change from coming to the language. It is apparent by now that you and Andrei quite often produce smoke screens to cover your trails of broken argument chains, which only serve to defend the status quo and do not really lead the language to a competitive position. And no, you are not right just because you declare it, and no, if you lose an argument it is not because someone changed the topic.

The sad part about D is that it could've become a major player, but it is very unlikely to become one without outside help and a less hostile attitude towards rather basic CS. But outside help is not really wanted, because apparently D can become a major player by 2020 without a cleanup, according to you and Andrei. It is highly unlikely for D to become a major player without a language cleanup and opening up more to outside input.
Jul 16 2016
parent reply Andrew Godfrey <X y.com> writes:
On Saturday, 16 July 2016 at 07:14:03 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 14 July 2016 at 23:38:17 UTC, Walter Bright wrote:
 On 7/14/2016 6:26 AM, Chris wrote:
 Now, now. Where's your sense of humor?
The thing is, he's just here to troll us. His posts all follow the same pattern of relentlessly finding nothing good whatsoever in D, and we're all idiots.
Whoah, that's sensitive. Never called anyone an idiot, but D zealots seem to have a very low threshold for calling everyone else with a little bit of experience idiots if they see room for change in the language. The excesses of broken argumentation in this newsgroup is keeping change from coming to the language. It is apparent by now that you and Andrei quite often produce smog screens to cover your trails of broken argument chains, which only serve to defend status quo and not really lead to the language to a competitive position. And no, you are not right just because you declare it, and no if you loose an argument it is not because someone changed the topic. The sad part about D is that it could've become a major player, but is very unlikely to become one without outside help and less hostile attitude towards rather basic CS. But outside help is not really wanted. Because apparently D can become a major player by 2020 without a cleanup according to you and Andrei. It is highly unlikely for D to become a major player without language cleanup and opening more up to outside input.
I didn't see anyone call you an idiot either. You and Walter have both gone too far, probably because you're annoyed at each other's words and attitude: Walter called Prolog "singularly useless". You have been referring to changes that would amount to a new major version of D as "a cleanup".

From the forums, my sense is that there IS a groundswell of opinion that D2 has some major mistakes in it that can't be rectified without doing a D3, and there's a strong reaction to that idea based on experience with D1 -> D2. Perhaps what is needed is a separate area for discussion about ideas that would require a major version change. The thing about that is that it can't be done incrementally; it's the rare kind of thing that would need to be planned long in advance, and would have to amount to a huge improvement to justify even considering it.
Jul 16 2016
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/16/2016 6:09 AM, Andrew Godfrey wrote:
 Walter called Prolog "singularly useless". You have been referring to changes
 that would amount to a new major version of D as "a cleanup". From the forums,
 my sense is that there IS a groundswell of opinion, that D2 has some major
 mistakes in it that can't be rectified without doing a D3, and there's a strong
 reaction to that idea based on experience with D1 -> D2. Perhaps what is needed
 is a separate area for discussion about ideas that would require a major
version
 change. The thing about that is that it can't be done incrementally; it's the
 rare kind of thing that would need to be planned long in advance, and would
have
 to amount to a huge improvement to justify even considering it.
I agree that D2 has made some fundamental mistakes. But it also got a great deal right.

I haven't banned Ola from the forums; he has done nothing to deserve that. He's welcome to post here, and others are welcome to engage him.
Jul 16 2016
parent reply Andrew Godfrey <X y.com> writes:
On Saturday, 16 July 2016 at 21:35:41 UTC, Walter Bright wrote:
 On 7/16/2016 6:09 AM, Andrew Godfrey wrote:
 Walter called Prolog "singularly useless". You have been 
 referring to changes
 that would amount to a new major version of D as "a cleanup". 
 From the forums,
 my sense is that there IS a groundswell of opinion, that D2 
 has some major
 mistakes in it that can't be rectified without doing a D3, and 
 there's a strong
 reaction to that idea based on experience with D1 -> D2. 
 Perhaps what is needed
 is a separate area for discussion about ideas that would 
 require a major version
 change. The thing about that is that it can't be done 
 incrementally; it's the
 rare kind of thing that would need to be planned long in 
 advance, and would have
 to amount to a huge improvement to justify even considering it.
I agree that D2 has made some fundamental mistakes. But it also got a great deal right. I haven't banned Ola from the forums, he has done nothing to deserve that. He's welcome to post here, and others are welcome to engage him.
I'm more interested in engaging on "in how many years will the D leadership be interested in engaging on the topic of D3?" I feel this is a significant omission from the vision doc, and that omission inflames a lot of the recurring animosity I see on the forums. Even an answer of "never" would be a significant improvement over "we refuse to engage on that". And I doubt you're really thinking "never". I do think that ideas from academia will mostly cause a lot of unwanted noise in such a discussion - because academia, in my experience, is more focused on "software construction" than on "software evolution", and D takes an approach that is built on practical experience with evolution. But academia also has occasional nuggets of extreme value.
Jul 16 2016
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/16/2016 7:17 PM, Andrew Godfrey wrote:
 I'm more interested in engaging on "in how many years will the D leadership be
 interested in engaging on the topic of D3?" I feel this is a significant
 omission from the vision doc, and that omission inflames a lot of the recurring
 animosity I see on the forums. Even an answer of "never" would be a significant
 improvement over "we refuse to engage on that". And I doubt you're really
 thinking "never".
There are no plans for D3 at the moment. All plans for improvement are backwards compatible as much as possible. D had its wrenching change with D1->D2, and it nearly destroyed us.
 I do think that ideas from academia will mostly cause a lot of unwanted noise
in
 such a discussion - because academia, in my experience, is more focused on
 "software construction" than on "software evolution", and D takes an approach
 that is built on practical experience with evolution. But academia also has
 occasional nuggets of extreme value.
Academia certainly does have value for us. Andrei has a PhD in computer science, I have a BS in mechanical and aerospace engineering, and I believe the difference in our backgrounds makes for great complementary skills.
Jul 16 2016
parent deadalnix <deadalnix gmail.com> writes:
On Sunday, 17 July 2016 at 05:14:57 UTC, Walter Bright wrote:
 On 7/16/2016 7:17 PM, Andrew Godfrey wrote:
 I'm more interested in engaging on "in how many years will the 
 D leadership be
 interested in engaging on the topic of D3?" I feel this is a 
 significant
 omission from the vision doc, and that omission inflames a lot 
 of the recurring
 animosity I see on the forums. Even an answer of "never" would 
 be a significant
 improvement over "we refuse to engage on that". And I doubt 
 you're really
 thinking "never".
There are no plans for D3 at the moment. All plans for improvement are backwards compatible as much as possible. D had its wrenching change with D1->D2, and it nearly destroyed us.
I think alienating the Tango crowd did way more in that regard than any breaking change could.
Jul 16 2016
prev sibling parent reply Chris <wendlec tcd.ie> writes:
On Sunday, 17 July 2016 at 02:17:52 UTC, Andrew Godfrey wrote:
 On Saturday, 16 July 2016 at 21:35:41 UTC, Walter Bright wrote:
 On 7/16/2016 6:09 AM, Andrew Godfrey wrote:
 Walter called Prolog "singularly useless". You have been 
 referring to changes
 that would amount to a new major version of D as "a cleanup". 
 From the forums,
 my sense is that there IS a groundswell of opinion, that D2 
 has some major
 mistakes in it that can't be rectified without doing a D3, 
 and there's a strong
 reaction to that idea based on experience with D1 -> D2. 
 Perhaps what is needed
 is a separate area for discussion about ideas that would 
 require a major version
 change. The thing about that is that it can't be done 
 incrementally; it's the
 rare kind of thing that would need to be planned long in 
 advance, and would have
 to amount to a huge improvement to justify even considering 
 it.
I agree that D2 has made some fundamental mistakes. But it also got a great deal right. I haven't banned Ola from the forums, he has done nothing to deserve that. He's welcome to post here, and others are welcome to engage him.
I'm more interested in engaging on "in how many years will the D leadership be interested in engaging on the topic of D3?" I feel this is a significant omission from the vision doc, and that omission inflames a lot of the recurring animosity I see on the forums. Even an answer of "never" would be a significant improvement over "we refuse to engage on that". And I doubt you're really thinking "never". I do think that ideas from academia will mostly cause a lot of unwanted noise in such a discussion - because academia, in my experience, is more focused on "software construction" than on "software evolution", and D takes an approach that is built on practical experience with evolution. But academia also has occasional nuggets of extreme value.
The question is what D3 is supposed to be. I'm neither for nor against D3; it pops up every once in a while when people are not happy with a feature. My questions are:

1. Is there any clear vision of what D3 should look like?
2. What exactly will it fix?
3. Is there a prototype (in progress) to actually prove it will fix those things?
4. If there is (real) proof[1], would it justify a break with D2 and risk D's death?

I think this topic is too serious to be just throwing in (partly academic) ideas that might or might not work in the real world. It's too serious to use D as a playground and later say "Ah well, it didn't work. [shrug]". D has left the playground and can no longer afford to just play around with ideas randomly. One has to be realistic.

I'd also like to add that if we had a "clean and compact" D3, it would become more complex over time and people would want D4 to solve this, then D5 and so forth. I haven't seen any software yet that hasn't become more complex over time.

Last but not least, it would help to make a list of the things D2 got right, to put the whole D3 issue into proportion.

[1] I.e. let's not refer to other languages in an eclectic manner. I'm asking for proof that D works as D3 and is superior to D2.
Jul 18 2016
parent reply Andrew Godfrey <X y.com> writes:
On Monday, 18 July 2016 at 09:45:39 UTC, Chris wrote:
 On Sunday, 17 July 2016 at 02:17:52 UTC, Andrew Godfrey wrote:
 On Saturday, 16 July 2016 at 21:35:41 UTC, Walter Bright wrote:
 On 7/16/2016 6:09 AM, Andrew Godfrey wrote:
 Walter called Prolog "singularly useless". You have been 
 referring to changes
 that would amount to a new major version of D as "a 
 cleanup". From the forums,
 my sense is that there IS a groundswell of opinion, that D2 
 has some major
 mistakes in it that can't be rectified without doing a D3, 
 and there's a strong
 reaction to that idea based on experience with D1 -> D2. 
 Perhaps what is needed
 is a separate area for discussion about ideas that would 
 require a major version
 change. The thing about that is that it can't be done 
 incrementally; it's the
 rare kind of thing that would need to be planned long in 
 advance, and would have
 to amount to a huge improvement to justify even considering 
 it.
I agree that D2 has made some fundamental mistakes. But it also got a great deal right. I haven't banned Ola from the forums, he has done nothing to deserve that. He's welcome to post here, and others are welcome to engage him.
I'm more interested in engaging on "in how many years will the D leadership be interested in engaging on the topic of D3?" I feel this is a significant omission from the vision doc, and that omission inflames a lot of the recurring animosity I see on the forums. Even an answer of "never" would be a significant improvement over "we refuse to engage on that". And I doubt you're really thinking "never". I do think that ideas from academia will mostly cause a lot of unwanted noise in such a discussion - because academia, in my experience, is more focused on "software construction" than on "software evolution", and D takes an approach that is built on practical experience with evolution. But academia also has occasional nuggets of extreme value.
The question is what is D3 supposed to be? I'm neither for nor against D3, it pops up every once in a while when people are not happy with a feature. My questions are: 1. Is there any clear vision of what D3 should look like? 2. What exactly will it fix? 3. Is there a prototype (in progress) to actually prove it will fix those things? 4. If there is (real) proof[1], would it justify a break with D2 and risk D's death? I think this topic is too serious to be just throwing in (partly academic) ideas that might or might not work in the real world. It's too serious to use D as a playground and later say "Ah well, it didn't work. [shrug]". D has left the playground and can no longer afford to just play around with ideas randomly. One has to be realistic. I'd also like to add that if we had a "clean and compact" D3, it would become more complex over time and people would want D4 to solve this, then D5 and so forth. I haven't seen any software yet that hasn't become more complex over time. Last but not least, it would help to make a list of the things D2 got right to put the whole D3 issue into proportion. [1] I.e. let's not refer to other languages in an eclectic manner. I'm asking for a proof that D works as D3 and is superior to D2.
We risk scaring away potential community members, and alienating existing ones, by the way we say "no" to proposals for breaking changes. We could improve how we say "no" by having a place to point people to. Potential topics:

1) As you say, a vision for D3. Maybe just a summary of the things that are now agreed upon, e.g. autodecoding (though even there, I think the details of where to move to are still contentious. E.g. I personally dislike the convention of "char" meaning a 1-byte data type, but I think some others like it).

2) The case against incremental breaking changes. (I see this argument somewhat, though it applies less to "dfixable" breaking changes.)

3) Why we feel that breaking changes risk killing D outright. (I just don't see it. I wonder if we're confusing "dfixable" breaking changes with other, more disruptive kinds, such as Tango=>Phobos.)
Jul 18 2016
next sibling parent jmh530 <john.michael.hall gmail.com> writes:
On Monday, 18 July 2016 at 13:48:16 UTC, Andrew Godfrey wrote:
 1) As you say, a vision for D3. Maybe just a summary of the 
 things that are now agreed upon, e.g. autodecoding (though even 
 there, I think the details of where to move to, are still 
 contentious. E.g. I personally dislike the convention of "char" 
 meaning a 1-byte data type but I think some others like it).

 2) The case against incremental breaking changes. (I see this 
 argument somewhat, though it applies less to "dfixable" 
 breaking changes).

 3) Why we feel that breaking changes risk killing D outright. 
 (I just don't see it. I wonder if we're confusing "dfixable" 
 breaking changes, with other more disruptive kinds (such as 
 Tango=>Phobos).)
I wasn't around for the D1 to D2 change, but I was around for Python 2 to Python 3, which was inconvenient. My sense is that a lot of the things mentioned here are "woulda-coulda-shoulda", like having defaults be @safe instead of @system. That would have been nice to have from the beginning, but it just seems way too disruptive to change now.

However, I don't have any particular issue with incremental breaking changes that are dfixable. But I think that saving them all up to do a huge D3 is potentially more disruptive than doing a small D3, completely dfixable, then a small D4, etc. Even a D3 that just changed autodecoding (which I don't think is dfixable, but who knows) would be good, as it would be just a small, limited breaking change.
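To illustrate what the default changes (my own snippet, with a made-up function name): under today's @system default the pointer arithmetic below compiles without any annotation, whereas marking the function @safe -- which an @safe-by-default D would effectively do for you -- turns it into a compile error.

// `firstTwoSum` is a hypothetical example, not anything from Phobos.
int firstTwoSum(int[] a)
{
    auto p = a.ptr;          // fine under the @system default
    return *p + *(p + 1);    // pointer arithmetic: rejected inside @safe code
}

void main()
{
    int[2] a = [3, 4];
    assert(firstTwoSum(a[]) == 7);
}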
Jul 18 2016
prev sibling next sibling parent reply Mathias Lang via Digitalmars-d <digitalmars-d puremagic.com> writes:
2016-07-18 15:48 GMT+02:00 Andrew Godfrey via Digitalmars-d <
digitalmars-d puremagic.com>:

 We risk scaring away potential community members, and alienating existing
 ones, by the way we say "no" to proposals for breaking changes. We could
 improve how we say "no", by having a place to point people to. Potential
 topics:
 [...]
I've never seen a definitive "No" to breaking changes. We do breaking changes all the time. Did everyone already forget what the latest release (2.071.0) was about? Revamping the import system, one of the core components of the language. But it took a lot of time, and experience, to do it. It did deprecate patterns people were using for a long time before (e.g. inheriting imports; a rough sketch is below), but it's an (almost) consistent and principled implementation.

Way too often I see suggestions for a change with one (or more) of the following mistakes:

- Wanting to bring a specific construct into the language rather than achieve a goal
- Only considering the pros of such a proposal and completely skipping any cons analysis
- Focusing on one single change without considering how it could affect the whole language

But I've never seen someone willing to put the effort into a proposal to improve the language be turned away. In fact, our review process for language changes was recently updated as well, to make it more accessible and save everyone's time. If that's not a commitment to continuous improvement of the language, I don't know what is.
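If I remember the 2.071.0 deprecations correctly, "inheriting imports" refers to symbols being found through a base class's imports. A rough, from-memory sketch of the post-2.071 style, where each scope imports what it uses (check the changelog for the exact rules):

// From-memory sketch only; not taken from the 2.071.0 release notes.
class Base
{
    import std.conv : to;   // a scoped import inside the base class
}

class Derived : Base
{
    string describe(int x)
    {
        // Before 2.071.0, lookup could (deprecatedly) reach `to` through
        // Base's scope; now the derived class imports what it needs itself.
        import std.conv : to;
        return to!string(x);
    }
}

void main()
{
    assert(new Derived().describe(42) == "42");
}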
Jul 18 2016
parent reply Chris <wendlec tcd.ie> writes:
On Monday, 18 July 2016 at 18:03:49 UTC, Mathias Lang wrote:
 2016-07-18 15:48 GMT+02:00 Andrew Godfrey via Digitalmars-d < 
 digitalmars-d puremagic.com>:


 I've never seen a definitive "No" to breaking changes.
 We do breaking changes all the time. Did everyone already 
 forget what the
 latest release (2.071.0) was about ? Revamping the import 
 system, one of
 the core component of the language.
 But it took a lot of time, and experience, to do it. It did 
 deprecate
 patterns people were using for a long time before (e.g. 
 inheriting
 imports), but its a (almost) consistent and principled 
 implementation.
Although it can be a PITA, people accept breaking changes, if they really make sense.
 Way too often I see suggestions for a change with one (or more) 
 of the
 following mistakes:
 - Want to bring a specific construct in the language rather 
 than achieve a
 goal
 - Only consider the pros of such a proposal and completely skip 
 any cons
 analysis
 - Focus on one single change without considering how it could 
 affect the
 whole language
That's also my impression. Given that D is open source I'm surprised that nobody has grabbed it and come up with a prototype of D3 or whatever. How else could you prove your case? After all the onus of proof is on the one who proposes a change. Don't just sit and wait until Walter says "Go ahead", knowing full well that the core devs are in no position to dedicate time to D3 at the moment - that's too easy and it gets us nowhere.
 But I've never seen someone willing to put the effort in a 
 proposal to
 improve the language be turned away.
 In fact, our review process for language change was recently 
 updated as
 well to make it more accessible and save everyone's time. If 
 it's not a
 commitment for continuous improvement of the language, I don't 
 know what it
 is.
Jul 19 2016
parent reply Andrew Godfrey <X y.com> writes:
On Tuesday, 19 July 2016 at 09:49:50 UTC, Chris wrote:
 On Monday, 18 July 2016 at 18:03:49 UTC, Mathias Lang wrote:
 2016-07-18 15:48 GMT+02:00 Andrew Godfrey via Digitalmars-d < 
 digitalmars-d puremagic.com>:


 I've never seen a definitive "No" to breaking changes.
 We do breaking changes all the time. Did everyone already 
 forget what the
 latest release (2.071.0) was about ? Revamping the import 
 system, one of
 the core component of the language.
 But it took a lot of time, and experience, to do it. It did 
 deprecate
 patterns people were using for a long time before (e.g. 
 inheriting
 imports), but its a (almost) consistent and principled 
 implementation.
Although it can be a PITA, people accept breaking changes, if they really make sense.
 Way too often I see suggestions for a change with one (or 
 more) of the
 following mistakes:
 - Want to bring a specific construct in the language rather 
 than achieve a
 goal
 - Only consider the pros of such a proposal and completely 
 skip any cons
 analysis
 - Focus on one single change without considering how it could 
 affect the
 whole language
That's also my impression. Given that D is open source I'm surprised that nobody has grabbed it and come up with a prototype of D3 or whatever. How else could you prove your case? After all the onus of proof is on the one who proposes a change. Don't just sit and wait until Walter says "Go ahead", knowing full well that the core devs are in no position to dedicate time to D3 at the moment - that's too easy and it gets us nowhere.
 But I've never seen someone willing to put the effort in a 
 proposal to
 improve the language be turned away.
 In fact, our review process for language change was recently 
 updated as
 well to make it more accessible and save everyone's time. If 
 it's not a
 commitment for continuous improvement of the language, I don't 
 know what it
 is.
We all seem to be in agreement that people often make breaking-change proposals without considering the impact properly. I have seen this many times on the forums, and not once (so far) has the reply been simply, "please go and read the section of our vision doc that talks about breaking changes". I'm suggesting that is what should happen.

Instead, I have seen each time a discussion thread that explores only parts of the topic of breaking changes, and does so badly. Just like earlier in this thread, where I mentioned dfixable breaking changes and Walter implied that even such a change would cause people to have to manually rewrite. (This example is a specific bias I have noticed in other threads: arguing about a breaking change without evaluating whether it is dfixable.)
Jul 19 2016
parent reply Jack Stouffer <jack jackstouffer.com> writes:
On Tuesday, 19 July 2016 at 15:22:19 UTC, Andrew Godfrey wrote:
 Just like earlier in this thread, where I mentined dfixable 
 breaking changes and Walter implied that even though a would 
 cause people to have to manually rewrite.
Something being dfix-able is not enough, for the simple reason that legacy code in D is already becoming a thing, despite D2 only existing for nine years.

A complaint has arisen many times in the forum and at DConfs that there are many packages on code.dlang.org or on Github that don't compile because the author stopped maintaining them. In many cases, these repos have only been unmaintained for a year(!) or less, and they already don't compile. There's no way for anyone other than the original author to fix this; all we can do is add a warning on code.dlang.org. All the while it reduces the signal-to-noise ratio of good to bad D code online.

Every breakage needs to take into account the baggage of visible old code. There's also the point that there are few changes which can be dfix-able (according to dfix's author).
Jul 20 2016
next sibling parent Andrew Godfrey <X y.com> writes:
On Wednesday, 20 July 2016 at 20:12:14 UTC, Jack Stouffer wrote:
 On Tuesday, 19 July 2016 at 15:22:19 UTC, Andrew Godfrey wrote:
 [...]
Something being dfix-able is not enough for the simple reason that legacy code in D is already becoming a thing, despite D2 only existing for nine years. A complaint has arisen many times in the forum and in DConfs that there are many packages on code.dlang.org or on Github that don't compile because the author stopped maintaining them. In many cases, these repo's have only been unmaintained for a year(!) or less, and they already don't compile. [...]
Thanks for the explanation. If most changes aren't dfixable (or aren't believed to be), that explains why the discussions I've read don't mention the dfix approach.
Jul 20 2016
prev sibling parent Kagamin <spam here.lot> writes:
That's an interesting outcome that backward compatibility matters 
for hobby users more than for commercial users :)
Jul 21 2016
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/18/2016 6:48 AM, Andrew Godfrey wrote:
 We risk scaring away potential community members, and alienating existing ones,
 by the way we say "no" to proposals for breaking changes. We could improve how
 we say "no", by having a place to point people to. Potential topics:
Anything we do will risk scaring away people. The only answer is we have to do what is right.
 3) Why we feel that breaking changes risk killing D outright. (I just don't see
 it. I wonder if we're confusing "dfixable" breaking changes, with other more
 disruptive kinds (such as Tango=>Phobos).)
Because if you thoroughly break a person's code, you put them in a position of rewriting it, and they may not choose to rewrite it in D3. They may choose a more stable language. I have many older programs in different languages. It's nice if they recompile and work. It's not nice if I have to go figure out how they work again so I can get them to work again.
Jul 18 2016
parent Charles Hixson via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 07/18/2016 03:37 PM, Walter Bright via Digitalmars-d wrote:
 On 7/18/2016 6:48 AM, Andrew Godfrey wrote:
 We risk scaring away potential community members, and alienating 
 existing ones,
 by the way we say "no" to proposals for breaking changes. We could 
 improve how
 we say "no", by having a place to point people to. Potential topics:
Anything we do will risk scaring away people. The only answer is we have to do what is right.
 3) Why we feel that breaking changes risk killing D outright. (I just 
 don't see
 it. I wonder if we're confusing "dfixable" breaking changes, with 
 other more
 disruptive kinds (such as Tango=>Phobos).)
Because if you thoroughly break a person's code, you put them in a position of rewriting it, and they may not choose to rewrite it in D3. They may choose a more stable language. I have many older programs in different languages. It's nice if they recompile and work. It's not nice if I have to go figure out how they work again so I can get them to work again.
For some changes there could be switches, rather like optimization-level switches, or ones managed by version. This would allow the compilation version to be set based on a compile-time variable; I'm not totally sure whether this should be file level or, as with version, block level... or selectable. This would get to be a huge maintenance chore after a while, but it would allow a few years to deprecate code.

The question is "How important would a change need to be to justify this kind of action?", and my guess is that it would need to be pretty important.
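A rough sketch of the kind of switch I mean, using a made-up -version=D2Compat identifier that code could honor during a deprecation window:

// `D2Compat` is hypothetical; it would be set with `dmd -version=D2Compat app.d`.
import std.stdio : writeln;

version (D2Compat)
{
    // Old behavior, kept alive for the transition period.
    string greeting(string name) { return "Hello " ~ name; }
}
else
{
    // New behavior after the (hypothetical) breaking change.
    string greeting(string name) { return "Hello, " ~ name ~ "!"; }
}

void main()
{
    writeln(greeting("world"));
}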
Jul 21 2016
prev sibling parent reply Ola Fosheim Grøstad writes:
On Saturday, 16 July 2016 at 13:09:22 UTC, Andrew Godfrey wrote:
 ideas that would require a major version change. The thing 
 about that is that it can't be done incrementally; it's the 
 rare kind of thing that would need to be planned long in 
 advance, and would have to amount to a huge improvement to 
 justify even considering it.
It does not need to be planned long in advance; it only requires official backing as a side project. They could freeze the current D2 as a stable release and also work on a cleanup. Instead you get people working on their own forks (myself included), or spin-off languages that go nowhere. Because you need momentum. As a result neither D nor the spin-offs gain momentum. And there are several spin-offs (some dead).

In the meantime, low-level languages like C++ and Rust and semi-high-level languages like Swift cut into the D feature set from both sides. C++ is clogged up by backwards-compatibility issues, but they are also smoothing out the rough edges where D once had clear advantages, especially in the areas of convenience. C++14/C++17 are not very exciting in terms of features, but the additions are removing what people now seem to call «friction».

In order to stay competitive over time you need something significantly better, like stronger typing, which depends on static analysis, which requires a simple identifiable core (not a simple language, but a simple core language after you remove the syntactic sugar). One possible valuable addition would have been restricted refinement types; another is solid pointer analysis (without false positives). Neither is incompatible with D as such, but you would probably need to attract compiler developers with a theoretical background to get it up in reasonable time. Without a clean core they will conclude that it will be too much work, and they will go to other big languages instead or work on other alternative languages that also go nowhere because they lack momentum...
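(For readers unfamiliar with the term: a refinement type is roughly a type plus a predicate its values must satisfy. The toy below is only a run-time emulation with a wrapper struct and an invariant, just to show the flavour -- real refinement types would be checked statically, and this is not a concrete proposal.)

// Toy emulation only: a "refined" int that must stay positive.
struct Positive
{
    int value;
    invariant { assert(value > 0, "refinement violated"); }
    this(int v) { value = v; }
    alias value this;   // Positive behaves as a subtype of int
}

int half(int x) { return x / 2; }

void main()
{
    auto p = Positive(42);
    assert(half(p) == 21);   // implicit conversion via alias this
    // Positive(0) would trip the invariant (in non-release builds).
}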
Jul 21 2016
parent reply Andrew Godfrey <X y.com> writes:
On Thursday, 21 July 2016 at 08:40:03 UTC, Ola Fosheim Grøstad 
wrote:
 On Saturday, 16 July 2016 at 13:09:22 UTC, Andrew Godfrey wrote:
 ideas that would require a major version change. The thing 
 about that is that it can't be done incrementally; it's the 
 rare kind of thing that would need to be planned long in 
 advance, and would have to amount to a huge improvement to 
 justify even considering it.
It does not need to be planned long in advance, it only requires official backing as a side project. They could freeze current D2 as a stable release and also work on a cleanup. Instead you get people working on their own forks (myself included), or spin-off languages that goes nowhere. Because you need momentum. As a result neither D or the spin-offs gain momentum. And there are several spin-offs (some dead).
You seem to be assuming that everyone already agrees on which set of changes should be made to the language. (Otherwise, how could you expect anyone to "officially back" a side project?) But agreeing on which changes to make and, especially, which to NOT make, is the hard part. And it's why you'd need a lot of planning & discussion up front (if any of us non-founders wanted to participate). And many people don't understand this, which IMO is behind a lot of hard feelings in the forums.
Jul 21 2016
parent Ola Fosheim Grøstad writes:
On Thursday, 21 July 2016 at 16:39:18 UTC, Andrew Godfrey wrote:
 You seem to be assuming that everyone already agrees on which 
 set of changes should be made to the language. (Otherwise, how 
 could you expect anyone to "officially back" a side project?)

 But agreeing on which changes to make and, especially, which to 
 NOT make, is the hard part. And it's why you'd need a lot of 
 planning & discussion up front (if any of us non-founders 
 wanted to participate). And many people don't understand this, 
 which IMO is behind a lot of hard feelings in the forums.
The basic idea would be not to radically change the language, but to come down to a clean core and build the existing useful concepts on top of that core. A year ago or so it was claimed that the compiler core would be refactored, and before that it was asked in the forum whether current users would prefer non-breaking changes or a cleanup. My impression was that the majority was willing to take some breaking changes in order to get a more streamlined experience.

Anyway, it is summertime; maybe we can discuss this later in the autumn ;-). (I don't have time to follow the forums.)
Jul 31 2016
prev sibling next sibling parent reply Shachar Shemesh <shachar weka.io> writes:
On 12/07/16 08:33, Andrei Alexandrescu wrote:
 On 07/12/2016 01:15 AM, Shachar Shemesh wrote:
 The topic was reference counting's interaction with immutable (see
 deadalnix's comment, to which I completely agree, about inter-features
 interactions).
Amaury failed to produce an example to support his point, aside from a rehash of a bug report from 2013 that is virtually fixed. Do you have any?
UFCS: Anywhere you can do "func(a)" you can also do "a.func()" and vice versa.

Operator ->: Not needed, as we know this is a pointer to a struct. We automatically dereference with the dot operator.

struct A {
     void method() {}
}

void main() {
     A* a;

     a.method(); // Okay
     method(a);  // Not okay
}
 When asked (by me) how you intend to actually solve this,
 you said that since you know where the memory comes from, you will cast
 away the immutability.

 Casting away immutability is UB in D.
I understand. There is an essential detail that sadly puts an anticlimactic end to the telenovela. The unsafe cast happens at allocator level. Inside any memory allocator, there is a point at which behavior outside the type system happens: memory that is untyped becomes typed, and vice versa (during deallocation).
Nah, this is cut and dried. You should just continue being nicely turbed. "Casting away immutability has undefined behavior" is what it is. [1]

It is quite okay, and even unavoidable, to go outside the type system inside an allocator. It is something else entirely to invoke UB.

The C++ definition is quite solid. Casting away constness is UB IFF the buffer was originally const.

In this case, your allocator does two UBs: one when allocating (casting a mutable byte range to an immutable reference), and another when deallocating. Both are defined as undefined by D, which means the compiler is free to wreak havoc in both without you having the right to complain.

Which leads me to the conclusion that you cannot write an allocator in D. I doubt that's a conclusion you'd stand behind.

Shachar

1 - https://forum.dlang.org/post/nlsbtr$2vaq$1@digitalmars.com
Jul 12 2016
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/12/2016 12:35 AM, Shachar Shemesh wrote:
 UFCS: Anywhere you can do "func(a)" you can also do "a.func()" and vice
 versa.

 Operator ->: Not needed, as we know this is a pointer to a struct. We
 automatically dereference with the dot operator.

 struct A {
      void method() {}
 }

 void main() {
      A* a;

      a.method(); // Okay
      method(a);  // Not okay
 }
I'm afraid I don't know what you're driving at with those examples.
Jul 12 2016
parent reply Shachar Shemesh <shachar weka.io> writes:
On 12/07/16 13:25, Walter Bright wrote:
 On 7/12/2016 12:35 AM, Shachar Shemesh wrote:
 UFCS: Anywhere you can do "func(a)" you can also do "a.func()" and vice
 versa.

 Operator ->: Not needed, as we know this is a pointer to a struct. We
 automatically dereference with the dot operator.

 struct A {
      void method() {}
 }

 void main() {
      A* a;

      a.method(); // Okay
      method(a);  // Not okay
 }
I'm afraid I don't know what you're driving at with those examples.
It is a single example. It shows that when UFCS and the lack of operator -> try to play together, the result is no longer as simple and elegant as one tries to sell them.

It was given as a response to Andrei's request for examples of cross-feature interference causing complexity.

Shachar
Jul 12 2016
next sibling parent ag0aep6g <anonymous example.com> writes:
On 07/12/2016 12:55 PM, Shachar Shemesh wrote:
 On 12/07/16 13:25, Walter Bright wrote:
 On 7/12/2016 12:35 AM, Shachar Shemesh wrote:
[...]
 struct A {
      void method() {}
 }

 void main() {
      A* a;

      a.method(); // Okay
      method(a);  // Not okay
 }
I'm afraid I don't know what you're driving at with those examples.
It is a single example. It shows that when UCFS and the lack of operator -> try to play together, the result is no longer as simple and elegant as one tries to sell them. It was given as a response to Andrei's request for examples of cross-features interference causing complexity.
There is no UFCS in that example, and the -> operator would only affect the "Okay" case. The "Not okay" case fails because there is no free function "method". It would fail even if D didn't have UFCS and if it had the -> operator.
Jul 12 2016
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/12/2016 3:55 AM, Shachar Shemesh wrote:
 On 12/07/16 13:25, Walter Bright wrote:
 On 7/12/2016 12:35 AM, Shachar Shemesh wrote:
 UFCS: Anywhere you can do "func(a)" you can also do "a.func()" and vice
 versa.

 Operator ->: Not needed, as we know this is a pointer to a struct. We
 automatically dereference with the dot operator.

 struct A {
      void method() {}
 }

 void main() {
      A* a;

      a.method(); // Okay
      method(a);  // Not okay
 }
I'm afraid I don't know what you're driving at with those examples.
It is a single example. It shows that when UCFS and the lack of operator -> try to play together, the result is no longer as simple and elegant as one tries to sell them. It was given as a response to Andrei's request for examples of cross-features interference causing complexity.
I assumed you were talking about UB or unsafe behavior. Thanks for the clarification.

As to the specific case here, the spec doesn't say "vice versa":

"A free function can be called with a syntax that looks as if the function were a member function of its first parameter type."

http://dlang.org/spec/function.html#pseudo-member

Not the other way.
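In code (my own snippet, not from the spec page), the asymmetry looks like this:

import std.stdio : writeln;

struct S
{
    void member() { writeln("member function"); }
}

void helper(S s) { writeln("free function"); }

void main()
{
    S s;
    s.member();   // ordinary member call
    helper(s);    // ordinary free-function call
    s.helper();   // OK: UFCS lets a free function be called like a member
    // member(s); // error: the rewrite does not go the other way; there is
    //            // no free function `member(S)`
}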
Jul 12 2016
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 07/12/2016 03:35 AM, Shachar Shemesh wrote:
 On 12/07/16 08:33, Andrei Alexandrescu wrote:
 On 07/12/2016 01:15 AM, Shachar Shemesh wrote:
 The topic was reference counting's interaction with immutable (see
 deadalnix's comment, to which I completely agree, about inter-features
 interactions).
Amaury failed to produce an example to support his point, aside from a rehash of a bug report from 2013 that is virtually fixed. Do you have any?
UFCS: Anywhere you can do "func(a)" you can also do "a.func()" and vice versa.

Operator ->: Not needed, as we know this is a pointer to a struct. We automatically dereference with the dot operator.

struct A {
     void method() {}
}

void main() {
     A* a;

     a.method(); // Okay
     method(a);  // Not okay
}
Thanks. I must have misunderstood - I was looking for something that's not @safe.
 When asked (by me) how you intend to actually solve this,
 you said that since you know where the memory comes from, you will cast
 away the immutability.

 Casting away immutability is UB in D.
I understand. There is an essential detail that sadly puts an anticlimactic end to the telenovela. The unsafe cast happens at allocator level. Inside any memory allocator, there is a point at which behavior outside the type system happens: memory that is untyped becomes typed, and vice versa (during deallocation).
Nah, this is cut and dried. You should just continue being nicely turbed. "Casting away immutability has undefined behavior" is what it is. [1]
AffixAllocator is not casting away immutability - that's the beauty of it. But I'm all for making the language more precise to allow the kind of work AffixAllocator does portably. Would love some help from you there!
 It is quite okay, and even unavoidable, to go outside the type system
 inside an allocator. It something else entirely to invoke UB.

 The C++ definition is quite solid. Casting away constness is UB IFF the
 buffer was originally const.
Yeah, we might relax that in D as well, albeit for different reasons.
 In this case, your allocator does two UBs. One when allocating (casting
 a mutable byte range to immutable reference), and another when
 deallocating. Both are defined as undefined by D, which means the
 compiler is free to wreak havoc in both without you having the right to
 complain.

 Which leads me to the conclusion that you cannot write an allocator in
 D. I doubt that's a conclusion you'd stand behind.
Again, your help with improving the language definition would be very welcome. Obviously we do want to have AffixAllocator and other allocators work properly. Andrei
Jul 12 2016
parent reply Shachar Shemesh <shachar weka.io> writes:
On 12/07/16 17:26, Andrei Alexandrescu wrote:

 Thanks. I must have misunderstood - I was looking for something that's
 not  safe.
No, I was referring to his statement that features in D tend to create complexity and unexpected/non-intuitive behavior when combined. Sorry if I was not clear.
 AffixAllocator is not casting away immutability - that's the beauty of
 it. But I'm all for making the language more precise to allow the kind
 of work AffixAllocator does portably. Would love some help from you there!
This is a bit academic, but I don't understand how you can get an immutable/const pointer to memory and then get a pointer to a mutable uint out of it without casting away the constness. Yes, you are subtracting from the pointer so that it points outside the original memory, but technically speaking (which is all the compiler really sees at this point), you are casting "immutable MyClass*" to "uint*". It is only semantically that you know this is fine.

I saw your reference to the code in a different comment. I believe (and I might be wrong) the cast in question to be here:

https://github.com/dlang/phobos/blob/master/std/experimental/allocator/building_blocks/affix_allocator.d#L213

The input might be CI (const/immutable). The output is mutable. If such a cast is UB, then the compiler is free to say "this is nonsense, I'm not going to do it". If we use C++'s UB definition, the compiler can say "I hereby assume that the buffer is actually mutable". That is, potentially, completely different code generation.
 The C++ definition is quite solid. Casting away constness is UB IFF the
 buffer was originally const.
Yeah, we might relax that in D as well, albeit for different reasons.
I would love to hear what D's reasons are, and in what way they are different from C++'s. To clarify: the previous sentence was meant to be read with no cynicism intended.
 In this case, your allocator does two UBs. One when allocating (casting
 a mutable byte range to immutable reference), and another when
 deallocating. Both are defined as undefined by D, which means the
 compiler is free to wreak havoc in both without you having the right to
 complain.

 Which leads me to the conclusion that you cannot write an allocator in
 D. I doubt that's a conclusion you'd stand behind.
Again, your help with improving the language definition would be very welcome. Obviously we do want to have AffixAllocator and other allocators work properly.
I was thinking about intrusive reference counting, which is the classic case where I'd use "mutable" in C++. In that case, the value we're mutating actually is a member of the struct that was passed as const (I think immutable isn't an issue in that use case).

I think a great first step is to relax the UB around casting away CI modifiers where we know, semantically, that the underlying memory is actually mutable.

Shachar
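To make the intrusive case concrete, here is a small illustration of my own in D terms (not code from any real library): the count lives inside the object, so bumping it through a const reference needs exactly the kind of cast whose status we are debating.

// Illustration only. Per the current spec the cast below is undefined
// behavior, and it is definitely wrong if the instance is genuinely
// immutable; here it happens to work because the object in main is mutable.
struct Node
{
    int payload;
    int refs;   // intrusive reference count; D has no C++-style `mutable` for it
}

void addRef(ref const Node n)
{
    ++(cast(Node*) &n).refs;   // cast away const to touch the embedded count
}

void main()
{
    auto n = Node(42, 1);
    addRef(n);
    assert(n.refs == 2);
}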
Jul 12 2016
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 07/12/2016 10:53 AM, Shachar Shemesh wrote:
 On 12/07/16 17:26, Andrei Alexandrescu wrote:

 Thanks. I must have misunderstood - I was looking for something that's
 not  safe.
No, I was referring to his statement that features in D tend to create complexity and unexpected/non-intuitive behavior when combined. Sorry if I was not clear.
I agree we have a few of those, though I don't think your example is particularly egregious.
 AffixAllocator is not casting away immutability - that's the beauty of
 it. But I'm all for making the language more precise to allow the kind
 of work AffixAllocator does portably. Would love some help from you
 there!
Saw your reference to the code in a different comment. I believe (and I might be wrong) the cast in question to be here: https://github.com/dlang/phobos/blob/master/std/experimental/allocator/building_blocks/affix_allocator.d#L213
That's the code.
 The input might be CI. The output is mutable.
No. It's important to understand that the "input" and "output" are distinct because of the [-1]. That gets into memory that was never typed as unsafe. This is a layout matter. It says that if you happen to know some immutable data sits 8 bytes to the right of some mutable data, doing the appropriate pointer arithmetic on the immutable data takes you correctly to the mutable data. I'm all for adding language to the spec to clarify that.
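In toy code, the layout idea looks something like this -- not the real AffixAllocator (names and sizes are mine), just a hand-rolled miniature of it: the mutable metadata lives in front of the block the client sees, so stepping back from the client pointer reaches memory that was never part of the (possibly immutable) object.

// Toy sketch only; std.experimental.allocator's AffixAllocator is the real,
// general implementation of this idea.
import core.memory : GC;

enum prefixSize = size_t.sizeof;

// Allocate n bytes preceded by a hidden, mutable size_t (used here as a
// reference count initialized to 1). The client only ever sees the n bytes.
void[] allocateWithCount(size_t n)
{
    auto raw = cast(ubyte*) GC.malloc(prefixSize + n);
    *cast(size_t*) raw = 1;
    return raw[prefixSize .. prefixSize + n];
}

// Recover the metadata from the client pointer, which the caller may have
// typed as const or immutable: step back first, then reinterpret the prefix
// bytes, which were never typed as part of the client's object.
ref size_t refCount(const(void)* clientPtr)
{
    const(ubyte)* prefix = (cast(const(ubyte)*) clientPtr) - prefixSize;
    return *cast(size_t*) prefix;
}

void main()
{
    auto block = allocateWithCount(16);
    assert(refCount(block.ptr) == 1);
    ++refCount(block.ptr);
    assert(refCount(block.ptr) == 2);
}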
 If such cast is UB, then the compiler is free to say "this is nonsense,
 I'm not going to do it".
It's not UB.
 If we use the C++'s UB definition, the compiler
 can say "I hereby assume that the buffer is actually mutable". This is,
 potentially, completely different code generation.
I understand.
 I was thinking about intrusive reference counting, which is the classic
 case I'd use "mutable" in C++. In that case, the value we're mutating
 actually is a member of the struct that was passed as const (I think
 immutable isn't an issue in that use case).
You'll need to use AffixAllocator to do intrusive RC.
 I think a great first step is to relax the UB around casting away CI
 modifiers where we know, semantically, that the underlying memory is
 actually mutable.
That would not be the right thing to do. Andrei
Jul 12 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/12/2016 8:04 AM, Andrei Alexandrescu wrote:
 That would not be the right thing to do.
Another wrinkle is that immutable data can be shared, and casting away immutability means it won't be synchronized anymore. So doing this would require knowledge that other threads aren't accessing it.
Jul 12 2016
parent Shachar Shemesh <shachar weka.io> writes:
On 13/07/16 03:29, Walter Bright wrote:
So doing this would
 require knowledge that other threads aren't accessing it.
You say that as if it's a bad thing. Yes, casting away protection should be done only when you know what you're doing, and if you make a mistake, bad things will happen. But I think a system programming language cannot prevent the user from doing dangerous things; otherwise you will simply not be leaving enough rope.

Shachar
Jul 12 2016
prev sibling next sibling parent Steven Schveighoffer <schveiguy yahoo.com> writes:
On 7/12/16 1:33 AM, Andrei Alexandrescu wrote:

 The solution (very ingenious, due to dicebot) in fact does not quite
 cast immutability away. Starting from a possibly immutable pointer, it
 subtracts an offset from it. At that point the memory is not tracked by
 the type system, but known to the allocator to contain metadata
 associated with the pointer that had been allocated with it. After the
 subtraction, the cast exposes the data which is mutable without
 violating the immutability of the object proper. As I said, it's quite
 an ingenious solution.
Ahem: https://forum.dlang.org/post/ft5a6g$1519$1@digitalmars.com

-Steve
Jul 12 2016
prev sibling next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Tue, Jul 12, 2016 at 01:33:00AM -0400, Andrei Alexandrescu via Digitalmars-d
wrote:
 On 07/12/2016 01:15 AM, Shachar Shemesh wrote:
[...]
 Casting away immutability is UB in D.
I understand. There is an essential detail that sadly puts an anticlimactic end to the telenovela. The unsafe cast happens at allocator level.
[...]

What's an "unsafe cast"? I think we're mixing up terminology here, which is not helping this discussion. Is casting away immutable merely *unsafe*, or is it UB? Because if it's UB (as understood by the rest of the world), then your statement essentially amounts to saying that allocators are UB. That in turn means that optimizing compilers are free to assume that allocators are impossible (since they are UB and the compiler is therefore free to do whatever it wants there, such as assume that it cannot ever happen), and, in all likelihood, output garbage in the executable as a result.

If you don't mean UB in this sense of the term, then you (well, the D language spec) need to define what exactly is supposed to happen when immutable is cast away. Exactly when is such a cast UB, and when is it *not* UB?

(I'm assuming that casting away immutable in the general case is UB, e.g., if the compiler puts such memory in ROM. But since this isn't always the case, e.g., when you're allocating a block of mutable memory from RAM but designating it as immutable for the purposes of the type system, the spec needs to specify exactly when such casts will not result in UB, to allow room for allocators to be implementable. If all the spec says is a blanket statement that such casts are UB, then by definition of UB all such allocator code is invalid, and an optimizing compiler is free to "optimize" it away, with disastrous results.)

Or perhaps what you *really* mean is that casting away immutable is *implementation-defined*, not UB. The two are not the same thing. (But even then, you may still run into trouble with implementations that define a behaviour that doesn't match what, e.g., the allocator code assumes. These things need to be explicitly stated in the spec so that implementors won't do something outside of what you intended -- whether deliberately or through a misunderstanding of the intent of the spec.)


T

-- 
There is no gravity. The earth sucks.
Jul 12 2016
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 07/12/2016 10:26 AM, H. S. Teoh via Digitalmars-d wrote:
 On Tue, Jul 12, 2016 at 01:33:00AM -0400, Andrei Alexandrescu via
Digitalmars-d wrote:
 On 07/12/2016 01:15 AM, Shachar Shemesh wrote:
[...]
 Casting away immutability is UB in D.
I understand. There is an essential detail that sadly puts an anticlimactic end to the telenovela. The unsafe cast happens at allocator level.
[...] What's an "unsafe cast"? I think we're mixing up terminology here, which is not helping this discussion. Is casting away immutable merely *unsafe*, or is it UB?
The subtlety here is that immutable is not being cast away. All data that is typed as immutable stays immutable. -- Andrei
Jul 12 2016
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 07/12/2016 10:40 AM, Andrei Alexandrescu wrote:
 On 07/12/2016 10:26 AM, H. S. Teoh via Digitalmars-d wrote:
 On Tue, Jul 12, 2016 at 01:33:00AM -0400, Andrei Alexandrescu via
 Digitalmars-d wrote:
 On 07/12/2016 01:15 AM, Shachar Shemesh wrote:
[...]
 Casting away immutability is UB in D.
I understand. There is an essential detail that sadly puts an anticlimactic end to the telenovela. The unsafe cast happens at allocator level.
[...] What's an "unsafe cast"? I think we're mixing up terminology here, which is not helping this discussion. Is casting away immutable merely *unsafe*, or is it UB?
The subtlety here is that immutable is not being cast away. All data that is typed as immutable stays immutable. -- Andrei
To clarify, this is the code in question: https://github.com/dlang/phobos/blob/master/std/experimental/allocator/building_blocks/affix_allocator.d#L210

The parameter is an array of a generic type T (which may be immutable). The immutability of data is not compromised; the pointer to the beginning of the array is only used as a pivot to access data that sits before the array. That data has never been typed as immutable, so the code is typed correctly (assuming the data had been allocated with this allocator), even though the compiler cannot prove it.

The language definition must allow this to work. But this is not a matter of changing the definition of immutable. It's a simple matter of data layout (e.g. if you add a positive number to a pointer and later you subtract it, you get the same pointer, etc.). We already have that down, even if the spec language could be better.

Andrei
Jul 12 2016
prev sibling parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 7/12/16 10:40 AM, Andrei Alexandrescu wrote:
 On 07/12/2016 10:26 AM, H. S. Teoh via Digitalmars-d wrote:
 On Tue, Jul 12, 2016 at 01:33:00AM -0400, Andrei Alexandrescu via
 Digitalmars-d wrote:
 On 07/12/2016 01:15 AM, Shachar Shemesh wrote:
[...]
 Casting away immutability is UB in D.
I understand. There is an essential detail that sadly puts an anticlimactic end to the telenovela. The unsafe cast happens at allocator level.
[...] What's an "unsafe cast"? I think we're mixing up terminology here, which is not helping this discussion. Is casting away immutable merely *unsafe*, or is it UB?
The subtlety here is that immutable is not being cast away. All data that is typed as immutable stays immutable. -- Andrei
A related question: are we planning on making such access pure (or even allowing compiler to infer purity)? If so, we may have issues... -Steve
Jul 12 2016
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 07/12/2016 11:07 AM, Steven Schveighoffer wrote:
 A related question: are we planning on making such access pure (or even
 allowing compiler to infer purity)? If so, we may have issues...
Was that the link you posted? What's a summary of the issues and what do you think would be a proper way to address them? Thanks! -- Andrei
Jul 12 2016
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 7/12/16 12:01 PM, Andrei Alexandrescu wrote:
 On 07/12/2016 11:07 AM, Steven Schveighoffer wrote:
 A related question: are we planning on making such access pure (or even
 allowing compiler to infer purity)? If so, we may have issues...
Was that the link you posted? What's a summary of the issues and what do you think would be a proper way to address them? Thanks! -- Andrei
No, the link I posted was a poor proposal by a (much?) younger me to do a similar thing to the affix allocator (but on the language level). Apparently, I didn't have enough cred back then :)

The issue I'm referring to is the compiler eliding calls. For example, let's say you have a reference counted type:

struct RC(T)
{
    T* value;
}

And T is immutable. So far so good. Now, we do this:

struct RC(T)
{
    alias MyAllocator = ...; // some form of AffixAllocator
    void incRef() { MyAllocator.prefix(value)++; }
}

Seems innocuous enough. However, if the compiler interprets incRef to be pure, and notices that "hey, all the parameters to this function are immutable, and it returns void! I don't have to call this, win-win!", then we have a problem.

I raised similar points when C's free was made pure (but to no avail).

It's not necessarily an unfixable problem, but we may need some language help to guarantee these calls aren't elided.

-Steve
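A self-contained sketch of this hazard, with the affix prefix hand-rolled rather than taken from std.experimental.allocator (Counted and makeCounted are made-up names): the reference count lives just before the immutable payload and is reached only through the payload pointer, so a compiler that treated incRef as strongly pure could be tempted to drop the call.

----
import core.memory : GC;

struct Counted
{
    immutable(int)* payload;

    // To an aggressive optimizer this can look like a candidate for elision:
    // it returns void and appears to only touch immutable state.
    void incRef() const
    {
        auto counter = (cast(size_t*) payload) - 1; // step back to the prefix
        ++*counter; // mutates memory that was never typed as immutable
    }
}

Counted makeCounted(int value)
{
    // Layout: [size_t refcount][int payload]
    auto block = cast(size_t*) GC.malloc(size_t.sizeof + int.sizeof);
    block[0] = 1; // initial reference count
    auto p = cast(int*) (block + 1);
    *p = value;
    return Counted(cast(immutable(int)*) p);
}

unittest
{
    auto c = makeCounted(7);
    c.incRef();
    assert(*((cast(size_t*) c.payload) - 1) == 2);
}
----

Whether such a call may legally be elided is exactly the purity question raised above; the sketch only makes the memory layout concrete.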
Jul 12 2016
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/12/16 12:12 PM, Steven Schveighoffer wrote:
 On 7/12/16 12:01 PM, Andrei Alexandrescu wrote:
 On 07/12/2016 11:07 AM, Steven Schveighoffer wrote:
 A related question: are we planning on making such access pure (or even
 allowing compiler to infer purity)? If so, we may have issues...
Was that the link you posted? What's a summary of the issues and what do you think would be a proper way to address them? Thanks! -- Andrei
No, the link I posted was a poor proposal by a (much?) younger me to do a similar thing to the affix allocator (but on the language level). Apparently, I didn't have enough cred back then :)
In all likelihood it must have been me who didn't get the implications. -- Andrei
Jul 12 2016
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2016-07-12 07:33, Andrei Alexandrescu wrote:

 The solution (very ingenious, due to dicebot) in fact does not quite
 cast immutability away. Starting from a possibly immutable pointer, it
 subtracts an offset from it. At that point the memory is not tracked by
 the type system, but known to the allocator to contain metadata
 associated with the pointer that had been allocated with it. After the
 subtraction, the cast exposes the data which is mutable without
 violating the immutability of the object proper. As I said, it's quite
 an ingenious solution.
What if the immutable data is stored in ROM [1]? I assume it's not possible to have an offset to a completely different memory storage. Not sure if this is important enough to care about. [1] http://dlang.org/const-faq.html#invariant -- /Jacob Carlborg
Jul 12 2016
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 07/12/2016 02:45 PM, Jacob Carlborg wrote:
 On 2016-07-12 07:33, Andrei Alexandrescu wrote:

 The solution (very ingenious, due to dicebot) in fact does not quite
 cast immutability away. Starting from a possibly immutable pointer, it
 subtracts an offset from it. At that point the memory is not tracked by
 the type system, but known to the allocator to contain metadata
 associated with the pointer that had been allocated with it. After the
 subtraction, the cast exposes the data which is mutable without
 violating the immutability of the object proper. As I said, it's quite
 an ingenious solution.
What if the immutable data is stored in ROM [1]? I assume it's not possible to have an offset to a completely different memory storage. Not sure if this is important enough to care about. [1] http://dlang.org/const-faq.html#invariant
The assumption is that the memory comes from that allocator. -- Andrei
Jul 12 2016
parent Jacob Carlborg <doob me.com> writes:
On 2016-07-12 20:48, Andrei Alexandrescu wrote:

 The assumption is that the memory comes from that allocator. -- Andrei
Right, didn't think of that. -- /Jacob Carlborg
Jul 13 2016
prev sibling parent reply Lodovico Giaretta <lodovico giaretart.net> writes:
On Tuesday, 12 July 2016 at 18:45:00 UTC, Jacob Carlborg wrote:
 On 2016-07-12 07:33, Andrei Alexandrescu wrote:

 The solution (very ingenious, due to dicebot) in fact does not 
 quite
 cast immutability away. Starting from a possibly immutable 
 pointer, it
 subtracts an offset from it. At that point the memory is not 
 tracked by
 the type system, but known to the allocator to contain metadata
 associated with the pointer that had been allocated with it. 
 After the
 subtraction, the cast exposes the data which is mutable without
 violating the immutability of the object proper. As I said, 
 it's quite
 an ingenious solution.
What if the immutable data is stored in ROM [1]? I assume it's not possible to have an offset to a completely different memory storage. Not sure if this is important enough to care about. [1] http://dlang.org/const-faq.html#invariant
The idea is that in general, you cannot cast away immutability because of this. But the allocator knows that the data at (ptr-offset) is not in ROM (because it didn't allocate it from ROM), and thus can cast away immutability without any issue. And the allocator is not violating any type system invariant, because there cannot exist any immutable reference to (ptr-offset), given that this area was not given away by the allocator.
Jul 12 2016
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 07/12/2016 02:53 PM, Lodovico Giaretta wrote:
 On Tuesday, 12 July 2016 at 18:45:00 UTC, Jacob Carlborg wrote:
 On 2016-07-12 07:33, Andrei Alexandrescu wrote:

 The solution (very ingenious, due to dicebot) in fact does not quite
 cast immutability away. Starting from a possibly immutable pointer, it
 subtracts an offset from it. At that point the memory is not tracked by
 the type system, but known to the allocator to contain metadata
 associated with the pointer that had been allocated with it. After the
 subtraction, the cast exposes the data which is mutable without
 violating the immutability of the object proper. As I said, it's quite
 an ingenious solution.
What if the immutable data is stored in ROM [1]? I assume it's not possible to have an offset to a completely different memory storage. Not sure if this is important enough to care about. [1] http://dlang.org/const-faq.html#invariant
The idea is that in general, you cannot cast away immutability because of this. But the allocator knows that the data at (ptr-offset) is not in ROM (because it didn't allocate it from ROM), and thus can cast away immutability without any issue. And the allocator is not violating any type system invariant, because there cannot exist any immutable reference to (ptr-offset), given that this area was not given away by the allocator.
Thanks for the crisp summary. -- Andrei
Jul 12 2016
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/11/2016 10:15 PM, Shachar Shemesh wrote:
 D says any such cast is UB.
That's why such casts are not allowed in safe code. There's also no way to write a storage allocator in safe code. Code that is not checkably safe is needed in real world programming. The difference between D and C++ here is that D provides a means of marking such code as unsafe so the rest can be checkably safe, and C++ does not.
Jul 11 2016
parent reply John Colvin <john.loughran.colvin gmail.com> writes:
On Tuesday, 12 July 2016 at 05:37:54 UTC, Walter Bright wrote:
 On 7/11/2016 10:15 PM, Shachar Shemesh wrote:
 D says any such cast is UB.
That's why such casts are not allowed in safe code. There's also no way to write a storage allocator in safe code. Code that is not checkably safe is needed in real world programming. The difference between D and C++ here is that D provides a means of marking such code as unsafe so the rest can be checkably safe, and C++ does not.
Code that *could* cause undefined behaviour given certain inputs is unsafe code. Can be ok if you're careful. Code that *does* do undefined behaviour isn't just unsafe, it's undefined. Never OK. If casting away immutability is undefined behaviour in D, then all paths of execution* that do it are undefined. For the previous statement to be false, you must define cases where casting away immutability *is* defined.
Jul 12 2016
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/12/2016 2:40 AM, John Colvin wrote:
 For the previous statement to be false, you must define cases where
 casting away immutability *is* defined.
system programming is, by definition, operating outside of the language guarantees of what happens. It's up to you, the systems programmer, to know what you're doing there.
Jul 12 2016
parent reply John Colvin <john.loughran.colvin gmail.com> writes:
On Tuesday, 12 July 2016 at 10:19:04 UTC, Walter Bright wrote:
 On 7/12/2016 2:40 AM, John Colvin wrote:
 For the previous statement to be false, you must define cases 
 where
 casting away immutability *is* defined.
system programming is, by definition, operating outside of the language guarantees of what happens. It's up to you, the systems programmer, to know what you're doing there.
This is so, so wrong. There's a world of difference between "you have to get this right or you're in trouble" and "the compiler (and especially the optimiser) is free to assume that what you're doing never happens". Undefined behaviour, as used in practice by modern optimising compilers, is in the second camp. You might have a different definition, but it's not the one everyone else is using and not the one that our two fastest backends understand. Given the definition of undefined behaviour that everyone else understands, do you actually mean "modifying immutable data by any means is undefined behaviour" instead of "casting away immutable is undefined behaviour"?
Jul 12 2016
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/12/2016 6:13 AM, John Colvin wrote:
 On Tuesday, 12 July 2016 at 10:19:04 UTC, Walter Bright wrote:
 On 7/12/2016 2:40 AM, John Colvin wrote:
 For the previous statement to be false, you must define cases where
 casting away immutability *is* defined.
system programming is, by definition, operating outside of the language guarantees of what happens. It's up to you, the systems programmer, to know what you're doing there.
This is so, so wrong. There's a world of difference between "you have to get this right or you're in trouble" and "the compiler (and especially the optimiser) is free to assume that what you're doing never happens". Undefined behaviour, as used in practice by modern optimising compilers, is in the second camp. You might have a different definition, but it's not the one everyone else is using and not the one that our two fastest backends understand. Given the definition of undefined behaviour that everyone else understands, do you actually mean "modifying immutable data by any means is undefined behaviour" instead of "casting away immutable is undefined behaviour"?
What's the difference?

Anyhow, if you cast away immutability, and the data exists in ROM, you still can't write to it (you'll get a seg fault). If it is in mutable memory, you can change it, but other threads may be caching or reading the value while you do that, i.e. synchronization issues. The optimizer may be taking advantage of immutability in its semantic transformations.

As a systems programmer, you'd have to account for that.
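For illustration, a sketch of the kind of assumption meant here (hypothetical: no claim is made about which compilers actually perform this folding today): because x is typed immutable, a compiler is entitled to assume that no call can change it.

----
// Because x is typed immutable, the compiler may assume the opaque call
// cannot change it, and is entitled to fold the second read into the first.
int twice(ref immutable int x, scope void delegate() opaque)
{
    int a = x;
    opaque();     // whatever this does, it may not legally modify x
    return a + x; // so this may be compiled as a + a
}

unittest
{
    immutable int x = 5;
    assert(twice(x, delegate void() {}) == 10);
}
----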
Jul 12 2016
parent reply John Colvin <john.loughran.colvin gmail.com> writes:
On Wednesday, 13 July 2016 at 00:03:04 UTC, Walter Bright wrote:
 On 7/12/2016 6:13 AM, John Colvin wrote:
 On Tuesday, 12 July 2016 at 10:19:04 UTC, Walter Bright wrote:
 On 7/12/2016 2:40 AM, John Colvin wrote:
 For the previous statement to be false, you must define 
 cases where
 casting away immutability *is* defined.
system programming is, by definition, operating outside of the language guarantees of what happens. It's up to you, the systems programmer, to know what you're doing there.
This is so, so wrong. There's a world of difference between "you have to get this right or you're in trouble" and "the compiler (and especially the optimiser) is free to assume that what you're doing never happens". Undefined behaviour, as used in practice by modern optimising compilers, is in the second camp. You might have a different definition, but it's not the one everyone else is using and not the one that our two fastest backends understand. Given the definition of undefined behaviour that everyone else understands, do you actually mean "modifying immutable data by any means is undefined behaviour" instead of "casting away immutable is undefined behaviour"?
What's the difference?
"Casting away immutable is undefined behaviour": the following code has undefined results (note, not implementation defined, not if-you-know-what-you're-doing defined, undefined), despite not doing anything much: void foo() { immutable a = new int; auto b = cast(int*)a; } "modifying immutable data is undefined": The above code is fine, but the following is still undefined: void foo() { immutable a = new int; auto b = cast(int*)a; b = 3; }
 Anyhow, if you cast away immutability, and the data exists in 
 rom, you still can't write to it (you'll get a seg fault). If 
 it is in mutable memory, you can change it, but other threads 
 may be caching or reading the value while you do that, i.e. 
 synchronization issues. The optimizer may be taking advantage 
 of immutability in its semantic transformations.

 As a systems programmer, you'd have to account for that.
Something like "it might be in ROM" is an implementation detail. It's the optimiser part that forces a formal decision about what is undefined behaviour or not. Implementation defined behaviour is in the realm of the "systems programming, be careful you know what you're doing". Undefined behaviour is a different beast.
Jul 13 2016
next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Wednesday, 13 July 2016 at 09:19:29 UTC, John Colvin wrote:
 Something like "it might be in ROM" is an implementation 
 detail. It's the optimiser part that forces a formal decision 
 about what is undefined behaviour or not.
«Undefined» simply means that such code is not part of the specified language, as in, it is no longer the language covered. The optimizer is an implementation detail, the optimizer is not allowed to change the semantics of the language. If casting away immutable is claimed to be undefined behaviour it simply means that code that does this is not in the language and the compiler could refuse to compile such code if it was capable of detecting it. Or it _could_ specify it to have a specific type of semantics, but that would be a new language. Andrei seems to argue that casting away immutable is not undefined behaviour in general.
 Implementation defined behaviour is in the realm of the 
 "systems programming, be careful you know what you're doing". 
 Undefined behaviour is a different beast.
When something is «implementation specific» it means that the concrete compiler/hardware _must_ specify it. For instance, the max available memory is usually implementation specific.
Jul 13 2016
parent reply sarn <sarn theartofmachinery.com> writes:
On Wednesday, 13 July 2016 at 10:02:58 UTC, Ola Fosheim Grøstad 
wrote:
 «Undefined» simply means that such code is not part of the 
 specified language, as in, it is no longer the language 
 covered. The optimizer is an implementation detail, the 
 optimizer is not allowed to change the semantics of the 
 language.

 If casting away immutable is claimed to be undefined behaviour 
 it simply means that code that does this is not in the language 
 and the compiler could refuse to compile such code if it was 
 capable of detecting it. Or it _could_ specify it to have a 
 specific type of semantics, but that would be a new language.
You're confusing "undefined" with "implementation defined". Implementation-defined stuff is something that's not specified, but can be presumed to do *something*. Undefined stuff is something that's officially considered to not even make sense, so a compiler can assume it never happens (even though a programmer can make the mistake of letting it happen). This is sometimes controversial, but does let optimisers do some extra tricks with sane code.
Jul 13 2016
next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Wednesday, 13 July 2016 at 10:25:55 UTC, sarn wrote:
 On Wednesday, 13 July 2016 at 10:02:58 UTC, Ola Fosheim Grøstad 
 wrote:
 «Undefined» simply means that such code is not part of the 
 specified language, as in, it is no longer the language 
 covered. The optimizer is an implementation detail, the 
 optimizer is not allowed to change the semantics of the 
 language.

 If casting away immutable is claimed to be undefined behaviour 
 it simply means that code that does this is not in the 
 language and the compiler could refuse to compile such code if 
 it was capable of detecting it. Or it _could_ specify it to 
 have a specific type of semantics, but that would be a new 
 language.
You're confusing "undefined" with "implementation defined".
I am not confusing anything. A superset of a language is still covering the language, but it is also a new language. I think you are confusing "language" with "parsing".
 Implementation-defined stuff is something that's not specified, 
 but can be presumed to do *something*.  Undefined stuff is 
 something that's officially considered to not even make sense, 
 so a compiler can assume it never happens (even though a 
 programmer can make the mistake of letting it happen).  This is 
 sometimes controversial, but does let optimisers do some extra 
 tricks with sane code.
No. «Undefined» means exactly that: not defined by the language specification, not part of the language. It does not say anything about what should or should not happen. It is simply not covered by the spec, and a compiler could be compliant even if it produced rubbish for such code, provided the spec does not require the compiler to detect whether a program is valid. It has nothing to do with optimisers; that's just an implementation detail.

«Implementation defined» means that the implemented compiler/interpreter _must_ define it in a sensible manner, depending on the context, in order to comply with the spec.
Jul 13 2016
prev sibling parent reply Chris Wright <dhasenan gmail.com> writes:
On Wed, 13 Jul 2016 10:25:55 +0000, sarn wrote:
 Implementation-defined stuff is something that's not specified, but can
 be presumed to do *something*.  Undefined stuff is something that's
 officially considered to not even make sense
This comes from the C++ spec. We're following the same definitions. To wit:

"""
The semantic descriptions in this International Standard define a parameterized nondeterministic abstract machine. Certain aspects and operations of the abstract machine are described in this International Standard as implementation-defined (for example, sizeof (int)). These constitute the parameters of the abstract machine. Each implementation shall include documentation describing its characteristics and behavior in these respects.

Certain other aspects and operations of the abstract machine are described in this International Standard as unspecified (for example, order of evaluation of arguments to a function). Where possible, this International Standard defines a set of allowable behaviors. These define the nondeterministic aspects of the abstract machine.

Certain other operations are described in this International Standard as undefined (for example, the effect of dereferencing the null pointer). [ Note: this International Standard imposes no requirements on the behavior of programs that contain undefined behavior. —end note ]
"""

Implementation-defined stuff *is* specified. It's specified in the documentation of each compiler.

Undefined stuff *can* make sense. I want to mutate a data structure marked immutable. It's obvious what I want to have happen; it's just not obvious what will actually happen.
Jul 13 2016
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Wednesday, 13 July 2016 at 13:52:06 UTC, Chris Wright wrote:
 Undefined stuff *can* make sense. I want to mutate a data 
 structure marked immutable. It's obvious what I want to have 
 happen, it's just not obvious what will actually happen.
But the compiler is part of the abstract machine, so it could simply refuse to compile a program that isn't valid, or it could define an extension that makes more programs valid. One usually uses these terms:

«well-formed program»: a program that follows both the syntactical rules and the required statically-detected semantic rules.

«valid program»: a program that is well-formed and that also does not lead to semantic errors that are required to be detected at least at runtime (or, in the case of undefined behaviour, errors that are not required to be detected for performance reasons).

Since C/C++ aims at avoiding semantic run-time checks, the standard goes with undefined behaviour instead. But there are compilers that do more than that.
Jul 13 2016
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/13/16 5:19 AM, John Colvin wrote:
 "Casting away immutable is undefined behaviour": the following code has
 undefined results (note, not implementation defined, not
 if-you-know-what-you're-doing defined, undefined), despite not doing
 anything much:

 void foo()
 {
     immutable a = new int;
     auto b = cast(int*)a;
 }

 "modifying immutable data is undefined": The above code is fine, but the
 following is still undefined:

 void foo()
 {
     immutable a = new int;
     auto b = cast(int*)a;
     *b = 3;
 }
Interesting distinction. We must render the latter undefined but not the former. Consider:

struct S { immutable int a; int b; }
S s;
immutable int* p = &s.a;

It may be the case that you need to get to s.b (and change it) when all you have is p, which is a pointer to s.a. This is essentially what makes AffixAllocator work.

Andrei
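Spelled out as a small sketch (bumpB is a made-up name; this is not Phobos code), getting from a pointer to s.a over to s.b looks like this:

----
struct S { immutable int a; int b; }

// Only the path to the memory is typed immutable; the b field itself
// was never immutable, so the write does not touch immutable data.
void bumpB(immutable(int)* pa)
{
    auto pb = cast(int*) (cast(ubyte*) pa - S.a.offsetof + S.b.offsetof);
    *pb = 42;
}

unittest
{
    S s;
    bumpB(&s.a);
    assert(s.b == 42);
}
----

Whether the spec is required to guarantee that assert is precisely what this subthread is about; the sketch just shows the pointer arithmetic being described.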
Jul 13 2016
next sibling parent reply John Colvin <john.loughran.colvin gmail.com> writes:
On Wednesday, 13 July 2016 at 11:28:11 UTC, Andrei Alexandrescu 
wrote:
 On 7/13/16 5:19 AM, John Colvin wrote:
 "Casting away immutable is undefined behaviour": the following 
 code has
 undefined results (note, not implementation defined, not
 if-you-know-what-you're-doing defined, undefined), despite not 
 doing
 anything much:

 void foo()
 {
     immutable a = new int;
     auto b = cast(int*)a;
 }

 "modifying immutable data is undefined": The above code is 
 fine, but the
 following is still undefined:

 void foo()
 {
     immutable a = new int;
     auto b = cast(int*)a;
     *b = 3;
 }
Interesting distinction. We must render the latter undefined but not the former. Consider: struct S { immutable int a; int b; } S s; immutable int* p = &s.a; It may be the case that you need to get to s.b (and change it) when all you have is p, which is a pointer to s.a. This is essentially what makes AffixAllocator work. Andrei
Hmm. You have to create a mutable reference to immutable data to do that (although you are casting away immutable). Let's consider this:

*(p + 1) = 3;

it either has to be written like this:

*((cast(int*)p) + 1) = 3;

or like this:

*(cast(int*)(p + 1)) = 3;

The first is creating a mutable pointer to immutable data, the second creates an immutable pointer to mutable data. I'm not sure which is worse, considering that those references could sit around for ages and be passed around etc., e.g.

auto tmp = p + 1;
// ... do loads of stuff, possibly reading from tmp
*(cast(int*)tmp) = 3;

seems like we would end up in trouble (either of our own creation or via the optimiser) from thinking tmp actually pointed to immutable data. Probably worse than a mutable reference to immutable data, as long as you didn't write to it.

Pointer arithmetic in objects is really quite dangerous w.r.t. immutability/const.
Jul 13 2016
next sibling parent Lodovico Giaretta <lodovico giaretart.net> writes:
On Wednesday, 13 July 2016 at 11:48:15 UTC, John Colvin wrote:
 Hmm. You have to create a mutable reference to immutable data 
 to do that (although you are casting away immutable). Let's 
 consider this:

 *(p + 1) = 3;

 it either has to be written like this:

 *((cast(int*)p) + 1) = 3;

 or like this:

 *(cast(int*)(p + 1)) = 3;

 The first is creating a mutable pointer to immutable data, the 
 second creates an immutable pointer to mutable data. I'm not 
 sure which is worse, considering that those reference could sit 
 around for ages and be passed around etc., e.g.

 auto tmp = p + 1;
 // ... do loads of stuff, possibly reading from tmp
 *(cast(int*)tmp) = 3;

 seems like we would end up in trouble (either of our own 
 creation or via the optimiser) from thinking tmp actually 
 pointed to immutable data. Probably worse than a mutable 
 reference to immutable data, as long as you didn't write to it.

 Pointer arithmetic in objects is really quite dangerous w.r.t. 
 immutability/const.
immutable int* p = ...
auto cp = cast(const int*) p; // cast immutable* to const*
auto cq = cp + 1;             // shift const* from immutable data to mutable data
auto q = cast(int*) cq;       // cast const* to mutable*
*q = 3;

This way both temporaries are const, and can point to both mutable and immutable. So you never use immutable pointers to mutable data nor mutable pointers to immutable data.
Jul 13 2016
prev sibling next sibling parent John Colvin <john.loughran.colvin gmail.com> writes:
On Wednesday, 13 July 2016 at 11:48:15 UTC, John Colvin wrote:
 On Wednesday, 13 July 2016 at 11:28:11 UTC, Andrei Alexandrescu 
 wrote:
 On 7/13/16 5:19 AM, John Colvin wrote:
 "Casting away immutable is undefined behaviour": the 
 following code has
 undefined results (note, not implementation defined, not
 if-you-know-what-you're-doing defined, undefined), despite 
 not doing
 anything much:

 void foo()
 {
     immutable a = new int;
     auto b = cast(int*)a;
 }

 "modifying immutable data is undefined": The above code is 
 fine, but the
 following is still undefined:

 void foo()
 {
     immutable a = new int;
     auto b = cast(int*)a;
     *b = 3;
 }
Interesting distinction. We must render the latter undefined but not the former. Consider: struct S { immutable int a; int b; } S s; immutable int* p = &s.a; It may be the case that you need to get to s.b (and change it) when all you have is p, which is a pointer to s.a. This is essentially what makes AffixAllocator work. Andrei
Hmm. You have to create a mutable reference to immutable data to do that (although you are casting away immutable).
Woops, I meant "You don't have to create".
Jul 13 2016
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/13/2016 4:48 AM, John Colvin wrote:
 Pointer arithmetic in objects is really quite dangerous w.r.t.
immutability/const.
Right, and one reason why pointer arithmetic isn't allowed in safe code.
Jul 14 2016
prev sibling next sibling parent reply ag0aep6g <anonymous example.com> writes:
On 07/13/2016 01:28 PM, Andrei Alexandrescu wrote:
 struct S { immutable int a; int b; }
 S s;
 immutable int* p = &s.a;

 It may be the case that you need to get to s.b (and change it) when all
 you have is p, which is a pointer to s.a. This is essentially what makes
 AffixAllocator work.
The obvious way doesn't seem so bad in isolation:

----
void main()
{
    S s;
    immutable int* p = &s.a;
    S* ps = cast(S*) p;
    ps.b = 42;
}
----

But when p comes from a pure function things get iffy:

----
struct S { immutable int a; int b; }

immutable(int*) f() pure
{
    S* s = new S;
    return &s.a;
}

void main()
{
    immutable int* p1 = f();
    S* ps1 = cast(S*) p1;
    ps1.b = 42;

    immutable int* p2 = f();
    S* ps2 = cast(S*) p2;
    ps2.b = 43;
}
----

f is marked pure, has no parameters and no mutable indirections in the return type. So f is strongly pure. That means the compiler is free to reuse p1 for p2. Or it may allocate two distinct structures, of course.

So, ps1 may or may not be the same as ps2, if those casts are allowed. That can't be right.
Jul 13 2016
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 07/13/2016 08:15 AM, ag0aep6g wrote:
 ----
 struct S { immutable int a; int b; }

 immutable(int*) f() pure
 {
      S* s = new S;
      return &s.a;
 }

 void main()
 {
      immutable int* p1 = f();
      S* ps1 = cast(S*) p1;
      ps1.b = 42;

      immutable int* p2 = f();
      S* ps2 = cast(S*) p2;
      ps2.b = 43;
 }
 ----

 f is marked pure, has no parameters and no mutable indirections in the
 return type. So f is strongly pure. That means, the compiler is free to
 reuse p1 for p2. Or it may allocate two distinct structures, of course.

 So, ps1 may or may not be the same as ps2, if those casts are allowed.
 That can't be right.
Good example. I think the two pointers should be allowed to be equal. -- Andrei
Jul 13 2016
prev sibling parent Shachar Shemesh <shachar weka.io> writes:
On 13/07/16 14:28, Andrei Alexandrescu wrote:
 Interesting distinction. We must render the latter undefined but not the
 former. Consider:
Here's the definition I'm proposing:

It is undefined behavior to cast away immutable, const or shared modifiers and reference the memory if any other part of the program accesses that memory with the modifiers intact.

Examples of behavior that is not undefined:

Affix allocator: accesses memory that is always accessed as mutable
Intrusive reference counting inside a struct: same deal
Only casting a pointer: no access

Shachar
Jul 13 2016
prev sibling parent Kagamin <spam here.lot> writes:
On Tuesday, 12 July 2016 at 13:13:42 UTC, John Colvin wrote:
 This is so, so wrong. There's a world of difference between 
 "you have to get this right or you're in trouble" and "the 
 compiler (and especially the optimiser) is free to assume that 
 what you're doing never happens".
I'd say the compiler (and especially the optimiser) should assume that mutable and immutable data don't overlap as if casting never happens. Not sure if this means gcc-style UB.
Jul 13 2016
prev sibling parent reply Johan Engelen <j j.nl> writes:
On Tuesday, 12 July 2016 at 09:40:09 UTC, John Colvin wrote:
 On Tuesday, 12 July 2016 at 05:37:54 UTC, Walter Bright wrote:
 On 7/11/2016 10:15 PM, Shachar Shemesh wrote:
 D says any such cast is UB.
That's why such casts are not allowed in safe code. There's also no way to write a storage allocator in safe code. Code that is not checkably safe is needed in real world programming. The difference between D and C++ here is that D provides a means of marking such code as unsafe so the rest can be checkably safe, and C++ does not.
Code that *could* cause undefined behaviour given certain inputs is unsafe code. Can be ok if you're careful. Code that *does* do undefined behaviour isn't just unsafe, it's undefined. Never OK. If casting away immutability is undefined behaviour in D, then all paths of execution* that do it are undefined. For the previous statement to be false, you must define cases where casting away immutability *is* defined.
Strongly agree. With `synchronized` we already have a problematic case of casting away immutability (dmdfe internally) where an optimizing compiler generates bad code. Let's not add more.
Jul 12 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/12/2016 3:31 AM, Johan Engelen wrote:
 With `synchronized` we already have a problematic case of casting away
 immutability (dmdfe internally) where an optimizing compiler generates
 bad code. Let's not add more.
Is there a bugzilla issue for this?
Jul 12 2016
parent reply Johan Engelen <j j.nl> writes:
On Tuesday, 12 July 2016 at 11:02:04 UTC, Walter Bright wrote:
 On 7/12/2016 3:31 AM, Johan Engelen wrote:
 With `synchronized` we already have a problematic case of 
 casting away
 immutability (dmdfe internally) where an optimizing compiler 
 generates
 bad code. Let's not add more.
Is there a bugzilla issue for this?
Of course there is. And has been for more than a year. https://issues.dlang.org/show_bug.cgi?id=14251
Jul 12 2016
parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/12/2016 4:45 AM, Johan Engelen wrote:
 https://issues.dlang.org/show_bug.cgi?id=14251
Thank you.
Jul 12 2016
prev sibling parent reply Jesse Phillips <Jesse.K.Phillips+D gmail.com> writes:
On Tuesday, 12 July 2016 at 05:15:09 UTC, Shachar Shemesh wrote:
 C++ fully defines when it is okay to cast away constness, gives 
 you aids so that you know that that's what you are doing, and 
 nothing else, and gives you a method by which you can do it 
 without a cast if the circumstances support it.

 D says any such cast is UB.

 Shachar
Yeah, C++ defines how you can modify const data after saying you can never modify data from a const-qualified access path. §7.1.6.1/3 [1]

I still haven't found someone who can explain how C++ can define the behavior of modifying a variable after casting away const. Sure, it says that if the original object was mutable (not stored in ROM) then you can modify it, but that is true of D as well; the language doesn't know the object is not stored in ROM, so it can't tell you what it will do when you try to modify it, only you can.

1. http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4296.pdf
Jul 14 2016
next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Thursday, 14 July 2016 at 16:17:19 UTC, Jesse Phillips wrote:
 I still haven't found someone who can explain how C++ can 
 define the behavior of modifying a variable after casting away 
 const.
C++ is locked down in a mine-field of backward compatibility issues and a need to interface with C verbatim (directly including C header files where const parameters might lack the const modifier). D does not work with C header files and can redefine the interfaces to fit D semantics in C bindings...
Jul 14 2016
parent reply Jesse Phillips <Jesse.K.Phillips+D gmail.com> writes:
On Thursday, 14 July 2016 at 16:47:20 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 14 July 2016 at 16:17:19 UTC, Jesse Phillips wrote:
 I still haven't found someone who can explain how C++ can 
 define the behavior of modifying a variable after casting away 
 const.
C++ is locked down in a mine-field of backward compatibility issues and a need to interface with C verbatim (directly including C header files where const parameters might lack the const modifier). D does not work with C header files and can redefine the interfaces to fit D semantics in C bindings...
That doesn't explain how you can define the behavior:

void foo(int const* p) {
    *(const_cast<int*>(p)) = 3;
}

Does 'p' get modified or is the program going to crash or something else? Please define it for me.

C++ says: You can't modify the location pointed to by 'p' from 'p', using const_cast on 'p' you'll either get undefined behavior or it will modify the location 'p' points to. So it is defined to either be undefined or modify the location 'p' refers to. The language isn't able to tell you what will happen so how can it define the behavior?
Jul 14 2016
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 7/14/16 1:46 PM, Jesse Phillips wrote:
 On Thursday, 14 July 2016 at 16:47:20 UTC, Ola Fosheim Grøstad wrote:
 On Thursday, 14 July 2016 at 16:17:19 UTC, Jesse Phillips wrote:
 I still haven't found someone who can explain how C++ can define the
 behavior of modifying a variable after casting away const.
C++ is locked down in a mine-field of backward compatibility issues and a need to interface with C verbatim (directly including C header files where const parameters might lack the const modifier). D does not work with C header files and can redefine the interfaces to fit D semantics in C bindings...
That doesn't explain how you can define the behavior: void foo(int const* p) { *(const_cast<int*>(p)) = 3; } Does 'p' get modified or is the program going to crash or something else? Please define it for me. C++ says: You can't modify the location pointed to by 'p' from 'p', using const_cast on 'p' you'll either get undefined behavior or it will modify the location 'p' points to. So it is defined to either be undefined or modify the location 'p' refers to. The language isn't able to tell you what will happen so how can it define the behavior?
That section means the compiler will stop you from doing it unless you cast it away :) That is:

*p = 3;

is a compiler error. That's all the note is saying.

What defining the behavior means is that the compiler has to take into account that a variable can change even though all available accesses to it are const. For example:

int x = 5;
foo(&x);
int y = x;

If what you wrote is UB (as it is in D), then the compiler can go ahead and assign 5 to y. In C++, the compiler has to reload x, because it may have changed.

Someone explained this to me recently on the NG.

-Steve
Jul 14 2016
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/14/2016 11:49 AM, Steven Schveighoffer wrote:
 In C++, the compiler has to reload x, because it may have changed.
That's right. I learned that the hard way, when the original optimizer would assume that x hadn't changed. It broke a surprising amount of code. It also means that the utility of const in C++ is extremely limited.
Jul 14 2016
parent reply Andrew Godfrey <X y.com> writes:
On Thursday, 14 July 2016 at 20:01:54 UTC, Walter Bright wrote:
 On 7/14/2016 11:49 AM, Steven Schveighoffer wrote:
 In C++, the compiler has to reload x, because it may have 
 changed.
That's right. I learned that the hard way, when the original optimizer would assume that x hadn't changed. It broke a surprising amount of code. It also means that the utility of const in C++ is extremely limited.
Walter, I hope you were just in a rush. Because I think you meant to say, "the utility of const in C++ for *optimizing code* is extremely limited". If you really think that the optimizer is the primary audience for language features, then... well, that would surprise me, given D's design, which generally seems quite mindful of "humans are the primary audience".

Though at times I do feel people use "auto" when they should state the type they expect (because then the compiler could help detect changes that break intent but might otherwise compile just fine).
Jul 18 2016
parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/18/2016 9:08 AM, Andrew Godfrey wrote:
 On Thursday, 14 July 2016 at 20:01:54 UTC, Walter Bright wrote:
 On 7/14/2016 11:49 AM, Steven Schveighoffer wrote:
 In C++, the compiler has to reload x, because it may have changed.
That's right. I learned that the hard way, when the original optimizer would assume that x hadn't changed. It broke a surprising amount of code. It also means that the utility of const in C++ is extremely limited.
Walter, I hope you were just in a rush. Because I think you meant to say, "the utility of const in C++ for *optimizing code* is extremely limited".
No. I meant const's utility to provide checkable, reliable information about code. I'm a big believer in encapsulation, and const is a major tool for that. But C++ const just isn't very helpful.
Jul 18 2016
prev sibling parent reply Jesse Phillips <Jesse.K.Phillips+D gmail.com> writes:
On Thursday, 14 July 2016 at 18:49:36 UTC, Steven Schveighoffer 
wrote:
 If what you wrote is UB (as it is in D), then the compiler can 
 go ahead and assign 5 to y.

 In C++, the compiler has to reload x, because it may have 
 changed.

 Someone explained this to me recently on the NG.

 -Steve
Thanks, so when people say "C++ defines the behavior of modifying const" what they really mean is "C++ defines const as meaningless."
Jul 14 2016
parent reply Shachar Shemesh <shachar weka.io> writes:
On 15/07/16 02:06, Jesse Phillips wrote:
 On Thursday, 14 July 2016 at 18:49:36 UTC, Steven Schveighoffer wrote:
 If what you wrote is UB (as it is in D), then the compiler can go
 ahead and assign 5 to y.

 In C++, the compiler has to reload x, because it may have changed.

 Someone explained this to me recently on the NG.

 -Steve
Thanks, so when people say "C++ defines the behavior of modifying const" what they really mean is "C++ defines const as meaningless."
Const is very far from meaningless in C++. It is an extremely valuable tool in turning bugs into compile time errors. That is not something to think lightly of (and, sadly, not something D does very well).

In terms of optimizations, there are, indeed, cases where, had const not been removable, things could be optimized more. I don't think D has a right to complain about C++ in that regard, however.

Also, see http://stackoverflow.com/questions/25029516/c-reliance-on-argument-to-const-reference-not-changing

Shachar
Jul 15 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2016 2:34 AM, Shachar Shemesh wrote:
 Const is very far from meaningless in C++. It is an extremely valuable tool in
 turning bugs into compile time errors. That is not something to think lightly
of
Unfortunately, C++ const is little more than advisory:

1. no protection against casting away const and changing it anyway
2. no protection against adding 'mutable' members and changing it anyway
3. only good for one level, no way to specify a data structure of generic type T as const
 (and, sadly, not something D does very well)
Explain. D fixes C++ faults 1..3.
 In terms of optimizations, there are, indeed, cases where, had const not been
 removable, things could be optimized more. I don't think D has a right to
 complain about C++ in that regard, however.
Of course D does. I had to disable const optimizations in my C++ compiler, which is one of the motivations for the way const works in D.
Jul 15 2016
parent reply Shachar Shemesh <shachar weka.io> writes:
On 15/07/16 13:13, Walter Bright wrote:

 1. no protection against casting away const and changing it anyway
 2. no protection against adding 'mutable' members and changing it anyway
 3. only good for one level, no way to specify a data structure of
 generic type T as const

 (and, sadly, not something D does very well)
Explain. D fixes C++ faults 1..3.
Yes, it does. And the result is that const is well defined, safe, and completely impractical to turn on. There are many many places I'd have the compiler enforce const correctness in C++, where in D I just gave up. In one of those places we even went as far as to add run time checks that no one inadvertently changed a buffer.

It means there are many scenarios in which I could put const in C++, and I simply can't in D, because something somewhere needs to be mutable.

in D (though, at least if my suggestion is accepted, without throwing away the optimizations it allows).
 In terms of optimizations, there are, indeed, cases where, had const
 not been
 removable, things could be optimized more. I don't think D has a right to
 complain about C++ in that regard, however.
Of course D does. I had to disable const optimizations in my C++ compiler, which is one of the motivations for the way const works in D.
For const, yes. In almost every other aspect of the language, however, D favors safety over performance. Just look at range checks, memory allocation, default values, and those are just the examples off the top of my head. I'm not saying that as a bad thing about D. It is a perfectly valid and reasonable trade off to make. I'm just saying D has no right to criticize C++ for missed optimizations. People who live in glass houses should not throw stones. Shachar
Jul 15 2016
next sibling parent reply Patrick Schluter <Patrick.Schluter bbox.fr> writes:
On Friday, 15 July 2016 at 10:25:16 UTC, Shachar Shemesh wrote:

 It means there are many scenarios in which I could put const in 
 C++, and I simply can't in D, because something somewhere needs 
 to be mutable.
Then it is not const and marking it as const is a bug. D enforces to not write a bug, what's wrong with that?
Jul 15 2016
parent reply Andrew Godfrey <X y.com> writes:
On Friday, 15 July 2016 at 11:09:24 UTC, Patrick Schluter wrote:
 On Friday, 15 July 2016 at 10:25:16 UTC, Shachar Shemesh wrote:

 It means there are many scenarios in which I could put const 
 in C++, and I simply can't in D, because something somewhere 
 needs to be mutable.
Then it is not const and marking it as const is a bug. D enforces to not write a bug, what's wrong with that?
One example is if you make a class that has an internal cache of something. Updating or invalidating that cache has no logical effect on the externally-observable state of the class. So you should be able to modify the cache even on a 'const' object. This is not a bug and I've seen it have a huge effect on performance - probably a lot more than the const optimizations Walter is talking about here.
Jul 15 2016
next sibling parent reply Dicebot <public dicebot.lv> writes:
On 07/15/2016 05:43 PM, Andrew Godfrey wrote:
 On Friday, 15 July 2016 at 11:09:24 UTC, Patrick Schluter wrote:
 On Friday, 15 July 2016 at 10:25:16 UTC, Shachar Shemesh wrote:

 It means there are many scenarios in which I could put const in C++, and
 I simply can't in D, because something somewhere needs to be mutable.
Then it is not const and marking it as const is a bug. D enforces to not write a bug, what's wrong with that?
One example is if you make a class that has an internal cache of something. Updating or invalidating that cache has no logical effect on the externally-observable state of the class. So you should be able to modify the cache even on a 'const' object. This is not a bug and I've seen it have a huge effect on performance - probably a lot more than the const optimizations Walter is talking about here.
Yes, and the fact that D prohibits this incredibly common C++ design anti-pattern makes me very grateful for that choice. Logical const is terrible: either don't mark such objects as const, or keep the cache separate.
Jul 15 2016
parent Mike Parker <aldacron gmail.com> writes:
On Friday, 15 July 2016 at 15:35:37 UTC, Dicebot wrote:

 One example is if you make a class that has an internal cache 
 of something. Updating or invalidating that cache has no 
 logical effect on the externally-observable state of the 
 class. So you should be able to modify the cache even on a 
 'const' object. This is not a bug and I've seen it have a huge 
 effect on performance - probably a lot more than the const 
 optimizations Walter is talking about here.
Yes and the fact that D prohibits this incredibly common C++ design anti-pattern makes me very grateful about such choice. Logical const is terrible - either don't mark such objects as const or make cache separate.
+1 Use an interface that prevents external modifications, e.g. getters, but no setters.
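A minimal sketch of that approach (made-up names, not a library API): the object is simply not const; read-only access is provided through an interface that exposes getters only, so the private cache stays freely mutable.

----
interface ReadOnlyStats
{
    double mean(); // getters only: no way to mutate through this interface
}

class Stats : ReadOnlyStats
{
    private double[] samples;
    private double cachedMean;
    private bool cacheValid;

    this(double[] samples) { this.samples = samples.dup; }

    void add(double x) // mutation stays on the concrete type
    {
        samples ~= x;
        cacheValid = false;
    }

    double mean()
    {
        if (!cacheValid)
        {
            cachedMean = 0;
            foreach (s; samples)
                cachedMean += s;
            cachedMean /= samples.length;
            cacheValid = true;
        }
        return cachedMean;
    }
}

unittest
{
    auto stats = new Stats([1.0, 2.0, 3.0]);
    ReadOnlyStats view = stats; // hand this out instead of a const reference
    assert(view.mean() == 2.0);
}
----

The read-only guarantee here comes from the API surface rather than from const, which is the trade-off being suggested.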
Jul 15 2016
prev sibling next sibling parent deadalnix <deadalnix gmail.com> writes:
On Friday, 15 July 2016 at 14:43:35 UTC, Andrew Godfrey wrote:
 On Friday, 15 July 2016 at 11:09:24 UTC, Patrick Schluter wrote:
 On Friday, 15 July 2016 at 10:25:16 UTC, Shachar Shemesh wrote:

 It means there are many scenarios in which I could put const 
 in C++, and I simply can't in D, because something somewhere 
 needs to be mutable.
Then it is not const and marking it as const is a bug. D enforces to not write a bug, what's wrong with that?
One example is if you make a class that has an internal cache of something. Updating or invalidating that cache has no logical effect on the externally-observable state of the class. So you should be able to modify the cache even on a 'const' object. This is not a bug and I've seen it have a huge effect on performance - probably a lot more than the const optimizations Walter is talking about here.
That's actually not true. A memory barrier needs to be emitted, and considered in the caller code.
Jul 15 2016
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2016 7:43 AM, Andrew Godfrey wrote:
 One example is if you make a class that has an internal cache of something.
 Updating or invalidating that cache has no logical effect on the
 externally-observable state of the class. So you should be able to modify the
 cache even on a 'const' object.
Yes, that's the "logical const" argument. The trouble with it is there's no way for the compiler to detect that's what you're doing, nor can it do any checks on it. In effect, C++ const becomes little more than a documentation suggestion.
 This is not a bug and I've seen it have a huge
 effect on performance - probably a lot more than the const optimizations Walter
 is talking about here.
You can do logical const in D just like in C++, and get those performance gains. You just can't call it "const". But you can call it /*logical_const*/ and get the same result.
Jul 15 2016
parent reply Shachar Shemesh <shachar weka.io> writes:
On 15/07/16 22:50, Walter Bright wrote:

 You can do logical const in D just like in C++, and get those
 performance gains. You just can't call it "const". But you can call it
 /*logical_const*/ and get the same result.
No, you can't. The fact that the compiler enforces the no const to mutable transition (unless you use a cast) is one of the main appeals of using const in any language. If you call something "logical const", but the compiler does not help you to catch bugs, then I don't see the point. In effect, if logical const is what you want, C++ gives you a tool while D leaves you to your own devices. As a result, a lot of places you'd define as const in C++ are defined mutable in D, losing language expressiveness. Shachar
Jul 15 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2016 1:48 PM, Shachar Shemesh wrote:
 On 15/07/16 22:50, Walter Bright wrote:

 You can do logical const in D just like in C++, and get those
 performance gains. You just can't call it "const". But you can call it
 /*logical_const*/ and get the same result.
No, you can't. The fact that the compiler enforces the no-const-to-mutable-transition rule (unless you use a cast)
The compiler does next to nothing - the maintainer can stick in 'mutable' members, and there's no reliable way to detect that. The maintainer can stick in const removing casts, and there's no reliable way to detect that, either. If it's not mechanically checkable, it is not reliable, and is what I call "faith-based programming."
 is one of the main appeals of using const in
 any language. If you call something "logical const", but the compiler does not
 help you to catch bugs, then I don't see the point.
I agree, and the C++ compiler is unable to verify "logical const". It's entirely based on faith.
 In effect, if logical const is what you want, C++ gives you a tool while D
 leaves you to your own devices. As a result, a lot of places you'd define as
 const in C++ are defined mutable in D, losing language expressiveness.
You and I certainly have polar opposite opinions on that. C++ does not have a notion of "logical const" (it is not in the C++ Standard). It's an uncheckable convention, might as well just use /*logical const*/. D, on the other hand, has verifiable const and verifiable purity.
Jul 15 2016
parent reply Andrew Godfrey <X y.com> writes:
On Friday, 15 July 2016 at 23:00:45 UTC, Walter Bright wrote:
 On 7/15/2016 1:48 PM, Shachar Shemesh wrote:
 On 15/07/16 22:50, Walter Bright wrote:

 You can do logical const in D just like in C++, and get those
 performance gains. You just can't call it "const". But you 
 can call it
 /*logical_const*/ and get the same result.
No, you can't. The fact that the compiler enforces the no-const-to-mutable-transition rule (unless you use a cast)
The compiler does next to nothing - the maintainer can stick in 'mutable' members, and there's no reliable way to detect that. The maintainer can stick in const removing casts, and there's no reliable way to detect that, either. If it's not mechanically checkable, it is not reliable, and is what I call "faith-based programming."
 is one of the main appeals of using const in
 any language. If you call something "logical const", but the 
 compiler does not
 help you to catch bugs, then I don't see the point.
I agree, and the C++ compiler is unable to verify "logical const". It's entirely based on faith.
 In effect, if logical const is what you want, C++ gives you a 
 tool while D
 leaves you to your own devices. As a result, a lot of places 
 you'd define as
 const in C++ are defined mutable in D, losing language 
 expressiveness.
You and I certainly have polar opposite opinions on that. C++ does not have a notion of "logical const" (it is not in the C++ Standard). It's an uncheckable convention, might as well just use /*logical const*/. D, on the other hand, has verifiable const and verifiable purity.
D's const/immutable feature is powerful and I love it. I would not trade it for C++'s version of const. It also seems fair to say that const as C++ implements it would not be worth adding to D, even if having two very similar features weren't confusing. After all, every feature starts with a negative score. This subthread took it too far; that's the only reason I waded in. C++'s const feature is not entirely useless. Similarly:
 If it's not mechanically checkable, it is not reliable,
I agree and I like mechanically checkable things. But I also like compiler features that mix mechanical checking with the ability to attest to something that can't be mechanically checked. Like the @system attribute. So this line of reasoning feels incomplete to me. Are we talking here about immutable/const only within the context of @safe code? If so, then I missed that but I get it. Otherwise, I don't get it.
Jul 15 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2016 8:25 PM, Andrew Godfrey wrote:
 I agree and I like mechanically checkable things. But I also like compiler
 features that mix mechanical checking with the ability to attest to something
 that can't be mechanically checked. Like the @system attribute. So this line of reasoning feels incomplete to me. Are we talking here about immutable/const only within the context of @safe code? If so, then I missed that but I get it.
Since casting away immutable/const is allowed in @system code, yes, I am referring to @safe code here.
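To make the boundary concrete, a minimal sketch (the function names are invented for illustration): the same const-removing cast compiles in @system code but is rejected in @safe code, and even in @system it is undefined behavior if the data is actually immutable.

// @system: the cast compiles, at the programmer's own risk
void scribble(const(int)* p) @system
{
    *cast(int*) p = 1;   // undefined behavior if *p really is immutable
}

// @safe: the compiler rejects the const-removing cast outright
// void scribbleSafe(const(int)* p) @safe
// {
//     *cast(int*) p = 1;   // error: casting away const is not allowed in @safe code
// }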
Jul 15 2016
next sibling parent reply Andrew Godfrey <X y.com> writes:
On Saturday, 16 July 2016 at 04:24:39 UTC, Walter Bright wrote:
 On 7/15/2016 8:25 PM, Andrew Godfrey wrote:
 I agree and I like mechanically checkable things. But I also 
 like compiler
 features that mix mechanical checking with the ability to 
 attest to something
 that can't be mechanically checked. Like the @system attribute. So this line of reasoning feels incomplete to me. Are we talking here about immutable/const only within the context of @safe code? If so, then I missed that but I get it.
Since casting away immutable/const is allowed in @system code, yes, I am referring to @safe code here.
Ok. Well, when you and Shachar were arguing, it still doesn't seem like Shachar was talking about @safe code specifically. I can't wrap my mind around wanting a "logical const" feature usable in @safe context; you could already use @system for those cases.
Jul 15 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2016 11:04 PM, Andrew Godfrey wrote:
 Ok. Well, when you and Shachar were arguing, it still doesn't seem like Shachar was talking about @safe code specifically. I can't wrap my mind around wanting a "logical const" feature usable in @safe context; you could already use @system for those cases.
@system provides a way around the type system, and offers fewer guarantees, sort of "use at your own risk". But use of @system is checkable, and when used properly only a small percentage of the code should be in @system functions. But in C++, everything is @system. I'm not sure how people successfully create enormous programs with it. I do know that the makers of add-on checkers like Coverity make bank. I once told a Coverity salesman that the purpose of D was to put Coverity (and its competitors) out of business :-) I saw him again a couple years later and he remembered me and that line!
Jul 15 2016
parent reply Andrew Godfrey <X y.com> writes:
On Saturday, 16 July 2016 at 06:40:31 UTC, Walter Bright wrote:

 But in C++, everything is @system. I'm not sure how people successfully create enormous programs with it.
I work on Microsoft Word. I'm not sure how much I can share about internal verification tools, but I can say: We do have SAL annotation: https://msdn.microsoft.com/en-us/library/ms235402.aspx As solutions go, SAL is dissatisfyingly incomplete, and not an easy mini-language to learn (I still haven't managed it, I look up what I need on the occasions that I need it). But it does impress at times with what it can catch. It goes a bit beyond memory safety, too, so I would guess that there are bug patterns it can catch that D currently won't. One class of bug I find interesting here is uninitialized variables. I'm not sure if Visual Studio helps here (we have an internal tool, I know some 3rd party tools do this too). But it's interesting that these tools can (often, not always) spot code paths where a variable doesn't get initialized. D's approach to this helps strongly to avoid using uninitialized memory, but in so doing, it discards the information these tools are using to spot such bugs. (So, the kind of bug D lets slip through here would tend to be one where variable foo's value is foo.init but it should have been initialized to some other value).
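A small sketch of the kind of slip being described (the names Config, timeoutMs and run are made up): the value is predictable, but it is the wrong predictable value, and nothing flags the path that forgot to set it.

struct Config
{
    int timeoutMs;   // intended to be set explicitly by every caller
}

void run(Config c) { /* ... */ }

void main()
{
    Config c;
    // forgot: c.timeoutMs = 5000;
    run(c);          // compiles and runs with timeoutMs == int.init (0);
                     // a flow-analysis tool for C/C++ might have flagged this path
}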
Jul 16 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/16/2016 5:32 AM, Andrew Godfrey wrote:
 On Saturday, 16 July 2016 at 06:40:31 UTC, Walter Bright wrote:

 But in C++, everything is @system. I'm not sure how people successfully create enormous programs with it.
I work on Microsoft Word. I'm not sure how much I can share about internal verification tools, but I can say: We do have SAL annotation: https://msdn.microsoft.com/en-us/library/ms235402.aspx
Thanks for taking the time to post about your experience with it. Comparing D with SAL is a worthwhile exercise.
 As solutions go, SAL is dissatisfyingly incomplete, and not an easy
 mini-language to learn (I still haven't managed it, I look up what I need on
the
 occasions that I need it). But it does impress at times with what it can catch.
 It goes a bit beyond memory safety, too, so I would guess that there are bug
 patterns it can catch that D currently won't.
I've seen SAL before, but have not studied it. My impression is it is much more complex than necessary. For example, https://msdn.microsoft.com/en-us/library/hh916383.aspx describes annotations to memcpy(). I believe these are better handled by use of dynamic arrays and transitive const. But there's no doubt that careful use of SAL will reduce bugs.
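As a rough illustration of that claim (a sketch, not a drop-in replacement for memcpy): a slice-based signature carries the lengths and the read-only guarantee that the SAL annotations spell out by hand, with the destination size enforced by the run-time bounds check.

// dst and src each carry their own length; const(T)[] promises src is never written
void copyInto(T)(T[] dst, const(T)[] src)
{
    dst[0 .. src.length] = src[];   // fails with a range error if dst is too small
}

void example()
{
    int[4] buf;
    copyInto(buf[], [1, 2, 3]);     // ok: copies three elements into buf
}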
 One class of bug I find interesting here is uninitialized variables. I'm not
 sure if Visual Studio helps here (we have an internal tool, I know some 3rd
 party tools do this too). But it's interesting that these tools can (often, not
 always) spot code paths where a variable doesn't get initialized. D's approach
 to this helps strongly to avoid using uninitialized memory, but in so doing, it
 discards the information these tools are using to spot such bugs. (So, the kind
 of bug D lets slip through here would tend to be one where variable foo's value
 is foo.init but it should have been initialized to some other value).
Uninitialized variables, along with their cousin (adding a field to a struct and forgetting to initialize it in one of its constructors), have caused me endless problems and cost me untold hours. It was a major motivator to solve this problem in D, and I am pleased that my problems with it have been essentially eliminated. You write that SAL still leaves undetected cases of uninitialized variables. I think I'd rather live with the limitation you mentioned in the D approach than risk uninitialized variables. Having a predictable wrong value in a variable is debuggable; having an unpredictable wrong value often is not, which is why such bugs consume so much time.
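A minimal sketch of the "forgotten constructor" cousin (the names are invented): because the field declaration carries its own initializer, a constructor that never mentions the field still starts from a sane value rather than garbage.

struct Widget
{
    int id;
    double scale = 1.0;          // newly added field, default given at the declaration

    this(int id)                 // written before 'scale' existed; never touches it
    {
        this.id = id;            // scale is still 1.0 here, not uninitialized memory
    }

    this(int id, double scale)
    {
        this.id = id;
        this.scale = scale;
    }
}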
Jul 16 2016
next sibling parent reply Andrew Godfrey <X y.com> writes:
On Saturday, 16 July 2016 at 21:52:02 UTC, Walter Bright wrote:
 On 7/16/2016 5:32 AM, Andrew Godfrey wrote:
 [...]
Thanks for taking the time to post about your experience with it. Comparing D with SAL is a worthwhile exercise.
 [...]
I've seen SAL before, but have not studied it. My impression is it is much more complex than necessary. For example, https://msdn.microsoft.com/en-us/library/hh916383.aspx describes annotations to memcpy(). I believe these are better handled by use of dynamic arrays and transitive const. But there's no doubt that careful use of SAL will reduce bugs.
 [...]
 Uninitialized variables, along with their cousin (adding a field to a struct and forgetting to initialize it in one of its constructors), have caused me endless problems and cost me untold hours. It was a major motivator to solve this problem in D, and I am pleased that my problems with it have been essentially eliminated. You write that SAL still leaves undetected cases of uninitialized variables. I think I'd rather live with the limitation you mentioned in the D approach than risk uninitialized variables. Having a predictable wrong value in a variable is debuggable; having an unpredictable wrong value often is not, which is why such bugs consume so much time.
I'm not trying to argue against D's design here. I'm thinking:
1) Static analysis tools still have relevance even in D code.
2) I wonder if an "uninitialized" feature would be worthwhile. That is, a value you can initialize a variable to, equal to 'init', but that static analyzers know you don't mean to ever use.
Jul 16 2016
next sibling parent reply pineapple <meapineapple gmail.com> writes:
On Sunday, 17 July 2016 at 02:03:52 UTC, Andrew Godfrey wrote:
 2) I wonder if an "uninitialized" feature would be worthwhile. 
 That is, a value you can initialize a variable to, equal to 
 'init', but that static analyzers know you don't mean to ever 
 use.
Don't we already have this in the form of int uninitialized_value = void; ?
Jul 16 2016
parent reply Andrew Godfrey <X y.com> writes:
On Sunday, 17 July 2016 at 02:07:19 UTC, pineapple wrote:
 On Sunday, 17 July 2016 at 02:03:52 UTC, Andrew Godfrey wrote:
 2) I wonder if an "uninitialized" feature would be worthwhile. 
 That is, a value you can initialize a variable to, equal to 
 'init', but that static analyzers know you don't mean to ever 
 use.
Don't we already have this in the form of int uninitialized_value = void; ?
No, it's not the same - void initialization leaves the variable uninitialized. I'm saying something that is still initialized, but that marks the initial value as not to be used. Anyway... given the existence of void initialization (which I'd forgotten about), what I suggested would be very confusing to add.
Jul 16 2016
parent Jacob Carlborg <doob me.com> writes:
On 2016-07-17 05:35, Andrew Godfrey wrote:

 No, it's not the same - void initialization leaves the variable uninitialized. I'm saying something that is still initialized, but that marks the initial value as not to be used. Anyway... given the existence of void initialization (which I'd forgotten about), what I suggested would be very confusing to add.
I think annotating a variable with a UDA would be perfect for this. The static analyzer would recognize the UDA and do the proper analyses. -- /Jacob Carlborg
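A sketch of what that might look like (the marker type and the tool that reads it are hypothetical; the compiler itself attaches no meaning to the attribute):

struct NotYetInitialized {}          // hypothetical marker type used as a UDA

@NotYetInitialized int threshold;    // holds int.init (0) for now; an external tool could
                                     // warn if it is read before some path assigns a real value

// the marker is visible through ordinary compile-time reflection:
static assert(__traits(getAttributes, threshold).length == 1);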
Jul 17 2016
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/16/2016 7:03 PM, Andrew Godfrey wrote:
 I'm thinking:

 1) Static analysis tools still have relevance even in D code.
I agree, but their utility is greatly reduced, meaning the effort yields only a small benefit/cost ratio.
 2) I wonder if an "uninitialized" feature would be worthwhile. That is, a value
 you can initialize a variable to, equal to 'init', but that static analyzers
 know you don't mean to ever use.
There is the `= void;` initialization. A static analyzer could flag any attempts to use a void initialized variable/field that is live.
Jul 16 2016
prev sibling parent reply Kagamin <spam here.lot> writes:
On Saturday, 16 July 2016 at 21:52:02 UTC, Walter Bright wrote:
 I've seen SAL before, but have not studied it. My impression is 
 it is much more complex than necessary. For example,

   https://msdn.microsoft.com/en-us/library/hh916383.aspx

 describes annotations to memcpy(). I believe these are better 
 handled by use of dynamic arrays and transitive const.
I suppose that in the case of memcpy the compiler can catch (at the call site) the case where the destination buffer has insufficient size, while D can catch it only at run time. It's a contract expressed with a simple grammar.
Jul 18 2016
parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/18/2016 5:06 AM, Kagamin wrote:
 On Saturday, 16 July 2016 at 21:52:02 UTC, Walter Bright wrote:
 I've seen SAL before, but have not studied it. My impression is it is much
 more complex than necessary. For example,

   https://msdn.microsoft.com/en-us/library/hh916383.aspx

 describes annotations to memcpy(). I believe these are better handled by use
 of dynamic arrays and transitive const.
I suppose that in the case of memcpy the compiler can catch (at the call site) the case where the destination buffer has insufficient size, while D can catch it only at run time. It's a contract expressed with a simple grammar.
Determining array bounds is equivalent to the halting problem in the general case, and SAL doesn't solve that.
Jul 18 2016
prev sibling parent reply Shachar Shemesh <shachar weka.io> writes:
On 16/07/16 07:24, Walter Bright wrote:
 Since casting away immutable/const is allowed in @system code, yes, I am referring to @safe code here.
That is something without which none of your arguments made sense to me. Thank you for your clarification. So, would you say you shouldn't use D unless all of your code is @safe? Most? Some? None? Shachar
Jul 15 2016
parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2016 11:12 PM, Shachar Shemesh wrote:
 So, would you say you shouldn't use D unless all of your code is @safe? Most? Some? None?
The idea is to minimize the use of @system. If you've got a large team and large codebase, the use of @system should merit special attention in code reviews, and should be in the purview of the more experienced programmers. There's way too much @system in Phobos, and I expect most of it can be scrubbed out.
Jul 15 2016
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/15/16 10:43 AM, Andrew Godfrey wrote:
 On Friday, 15 July 2016 at 11:09:24 UTC, Patrick Schluter wrote:
 On Friday, 15 July 2016 at 10:25:16 UTC, Shachar Shemesh wrote:

 It means there are many scenarios in which I could put const in C++, and I simply can't in D, because something somewhere needs to be mutable.
Then it is not const, and marking it as const is a bug. D enforces that you not write that bug; what's wrong with that?
One example is if you make a class that has an internal cache of something. Updating or invalidating that cache has no logical effect on the externally-observable state of the class. So you should be able to modify the cache even on a 'const' object. This is not a bug and I've seen it have a huge effect on performance - probably a lot more than the const optimizations Walter is talking about here.
I suggest you take a look at http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2669.htm. It adds guarantees for STL containers that effectively prohibit them from using mutable. If they do use mutable, they are on their own in ensuring correctness. Also, although arguably all types should behave that way, there is no way to express something near "this user-defined type satisfies N2669" within the C++ type system. Also, N2669 encodes existing practice; the whole logical const and surreptitious caches inside apparently const objects is liable to bring more problems than it solves (see e.g. the std::string reference counting fiasco). -- Andrei
Jul 17 2016
parent Andrew Godfrey <X y.com> writes:
On Sunday, 17 July 2016 at 12:38:46 UTC, Andrei Alexandrescu 
wrote:
 On 7/15/16 10:43 AM, Andrew Godfrey wrote:
 On Friday, 15 July 2016 at 11:09:24 UTC, Patrick Schluter 
 wrote:
 On Friday, 15 July 2016 at 10:25:16 UTC, Shachar Shemesh 
 wrote:
 I think the one that hurts the most is fixing "C++ fault" 

 It means there are many scenarios in which I could put const in C++, and I simply can't in D, because something somewhere needs to be mutable.
Then it is not const, and marking it as const is a bug. D enforces that you not write that bug; what's wrong with that?
One example is if you make a class that has an internal cache of something. Updating or invalidating that cache has no logical effect on the externally-observable state of the class. So you should be able to modify the cache even on a 'const' object. This is not a bug and I've seen it have a huge effect on performance - probably a lot more than the const optimizations Walter is talking about here.
I suggest you take a look at http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2669.htm. It adds guarantees for STL containers that effectively prohibit them from using mutable. If they do use mutable, they are on their own in ensuring correctness. Also, although arguably all types should behave that way, there is no way to express something near "this user-defined type satisfies N2669" within the C++ type system. Also, N2669 encodes existing practice; the whole logical const and surreptitious caches inside apparently const objects is liable to bring more problems than it solves (see e.g. the std::string reference counting fiasco). -- Andrei
It's certainly true that if I see "mutable" used in code, it catches my attention and engages my extreme skepticism. It is very hard to get right. Yet, in the handful of cases I've ever seen it used, the people who used it generally knew what they were doing and did get it right. And banning mutable in those situations would have caused a cascade of non-const reaching far up into the system, where it wasn't wanted and would remove important protections. I read N2669 and it doesn't "effectively prohibit" mutable as far as I can see. It does mean that to use any mutable state you'd need protection, such as locks, or lockfree trickery. Generally, I suspect that the only allowable externally-observable effect of using "mutable" is improved performance. But perhaps there is some other valid use that I just haven't encountered.
Jul 17 2016
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2016 3:25 AM, Shachar Shemesh wrote:
 On 15/07/16 13:13, Walter Bright wrote:

 1. no protection against casting away const and changing it anyway
 2. no protection against adding 'mutable' members and changing it anyway
 3. only good for one level, no way to specify a data structure of
 generic type T as const

 (and, sadly, not something D does very well)
Explain. D fixes C++ faults 1..3.
Yes, it does. And the result is that const is well defined, safe, and completely impractical to turn on. There are many, many places where I'd have the compiler enforce const correctness in C++, where in D I just gave up. In one of those places we even went as far as to add run-time checks that no one inadvertently changed a buffer.
When we first introduced const to D, this was a common complaint. People were accustomed to the weaknesses of C++ const and misinterpreted them as strengths :-) But over time, D const won most people over as being a better way, because it offers guarantees that C++ const simply does not. For one, it enables function purity.

 It means there are many scenarios in which I could put const in C++, and I simply can't in D, because something somewhere needs to be mutable.
That's the same argument that appeared when we introduced transitive const. But it means you can't do FP in C++. It means const doesn't work with generic types and generic algorithms. It means that const in a function signature tells you little to nothing unless it is applied to basic types.
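For contrast, a small D example of the guarantee being claimed (the function name is arbitrary): because const is transitive and purity is checked, the signature alone tells the caller that the argument is not modified and that no mutable global state is read or written.

pure nothrow @safe
int total(const(int)[] values)
{
    int sum = 0;
    foreach (v; values)
        sum += v;
    return sum;   // cannot write through 'values' or touch mutable globals; the compiler checks it
}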

 (though, at least if my suggestion is accepted, without throwing away the
 optimizations it allows).
Casting away const is only allowed in @system code. I agree that we need an improved definition of what happens when const is cast away in @system code, but in no case does it make things worse than in C++.
 In terms of optimizations, there are, indeed, cases where, had const
 not been
 removable, things could be optimized more. I don't think D has a right to
 complain about C++ in that regard, however.
Of course D does. I had to disable const optimizations in my C++ compiler, which is one of the motivations for the way const works in D.
For const, yes. In almost every other aspect of the language, however, D favors safety over performance. Just look at range checks, memory allocation, default values, and those are just the examples off the top of my head.
1. range checks - can be disabled by a compiler switch
2. memory allocation - D programmers can use any of C++'s allocation methods
3. default values - are removed by standard dead assignment optimizations, or can be disabled by initializing with '= void;'

There is one opportunity for C++ that D eschews: taking advantage of undefined behavior on signed integer overflow to improve loops: http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html

Practically speaking, optimizers are heavily built for and tuned for C++ semantics. Opportunities that arise due to the semantics of D are not exploited, but this isn't the fault of the core language and does not make C++ better. Opportunities for D that are not available in C++:

1. making use of const
2. making use of immutable
3. making use of function purity
4. making use of asserts to provide information to the optimizer
 I'm not saying that as a bad thing about D. It is a perfectly valid and
 reasonable trade off to make. I'm just saying D has no right to criticize C++
 for missed optimizations. People who live in glass houses should not throw
stones.
I think your argument there is completely destroyed :-)
Jul 15 2016
next sibling parent reply Jack Stouffer <jack jackstouffer.com> writes:
On Friday, 15 July 2016 at 19:06:15 UTC, Walter Bright wrote:
 4. making use of asserts to provide information to the optimizer
Do dmd/ldc/gdc actually do this?
Jul 15 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2016 12:55 PM, Jack Stouffer wrote:
 On Friday, 15 July 2016 at 19:06:15 UTC, Walter Bright wrote:
 4. making use of asserts to provide information to the optimizer
Do dmd/ldc/gdc actually do this?
dmd doesn't. I don't know about other compilers. The point is that it's possible in D, and not in C++, because C++ doesn't have asserts as part of the language. C++ has an assert macro, defined to be the same as in C. The definition of assert in C is such that it is turned on/off with the NDEBUG macro, meaning that when it is off, the compiler CANNOT derive any semantic information from it, because it effectively vanishes from the code. In contrast, assert in D is a keyword and has a semantic production. Even if generating code for the assert is disabled with the -release switch, the semantics of it remain and are available to the optimizer. C++ didn't repeat that mistake with 'static_assert' (another feature copied from D), but static_assert doesn't help the optimizer.
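A sketch of the kind of information meant here; whether any current D compiler actually exploits it is exactly the question above, so the comments only describe what the language, unlike the C macro, leaves visible to the optimizer.

int pick(int[] a, size_t i)
{
    assert(i < a.length);   // with -release no check is emitted, but the condition is still
                            // part of the function's semantics, so an optimizer would be
                            // entitled (on the reading above) to assume it holds...
    return a[i];            // ...for example, to elide its own bounds check here
}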
Jul 15 2016
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 15.07.2016 22:29, Walter Bright wrote:
 On 7/15/2016 12:55 PM, Jack Stouffer wrote:
 On Friday, 15 July 2016 at 19:06:15 UTC, Walter Bright wrote:
 4. making use of asserts to provide information to the optimizer
Do dmd/ldc/gdc actually do this?
dmd doesn't. I don't know about other compilers. The point is that it's possible in D, and not in C++, because C++ doesn't have asserts as part of the language. C++ has an assert macro, defined to be the same as in C. The definition of assert in C is such that it is turned on/off with the NDEBUG macro, meaning that when it is off, the compiler CANNOT derive any semantic information from it, because it effectively vanishes from the code. In contrast, assert in D is a keyword and has a semantic production. Even if generating code for the assert is disabled with the -release switch, the semantics of it remain and are available to the optimizer. C++ didn't repeat that mistake with 'static_assert' (another feature copied from D), but static_assert doesn't help the optimizer.
Just to be explicit about this: What kind of "help" do you want to be provided to the optimizer? I.e., why didn't you say: "Failing assertions are undefined behavior with the -release switch."
Aug 05 2016
prev sibling parent reply Shachar Shemesh <shachar weka.io> writes:
On 15/07/16 22:06, Walter Bright wrote:
 2. memory allocation - D programmers can use any of C++'s allocation
 methods
Do enlighten me how to use intrusive reference counting in D. I am quite interested in the answer. Or, for that matter, tracking lifetime through an external linked list with an intrusive node structure. The first is impossible due to const casting rules, and the second adds the additional problem of being completely thwarted by D's move semantics. This is before mentioning how, unlike in D, alternate allocators are a fundamental part of C++'s standard library, or how it is not possible to throw exceptions without using the GC.
 I think your argument there is completely destroyed :-)
I do not understand the joy both you and Andrei express when you think you have "won" an "argument". This gives me the feeling that I'm not part of a process designed to make the language better, but rather part of an argument meant to prove to me that the language is fine the way it is. Not a great feeling, and not something that fosters confidence in the language's future direction. Shachar
Jul 15 2016
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 07/15/2016 04:58 PM, Shachar Shemesh wrote:
 I do not understand the joy both you and Andrei express when you think
 you have "won" an "argument". This gives me the feeling that I'm not
 part of a process designed to make the language better, but rather part
 of an argument meant to prove to me that the language is fine the way it
 is. Not a great feeling, and not something that fosters confidence in
 the language's future direction.
We should indeed improve that. Thanks! -- Andrei
Jul 15 2016
parent jmh530 <john.michael.hall gmail.com> writes:
On Friday, 15 July 2016 at 21:24:12 UTC, Andrei Alexandrescu 
wrote:
 On 07/15/2016 04:58 PM, Shachar Shemesh wrote:
 I do not understand the joy both you and Andrei express when 
 you think
 you have "won" an "argument". This gives me the feeling that 
 I'm not
 part of a process designed to make the language better, but 
 rather part
 of an argument meant to prove to me that the language is fine 
 the way it
 is. Not a great feeling, and not something that fosters 
 confidence in
 the language's future direction.
We should indeed improve that. Thanks! -- Andrei
Humbug, destroying someone's argument is one of the best things ever. Source: high school debater.
Jul 15 2016
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2016 1:58 PM, Shachar Shemesh wrote:
 I think your argument there is completely destroyed :-)
I do not understand the joy both you and Andrei express when you think you have "won" an "argument". This gives me the feeling that I'm not part of a process designed to make the language better, but rather part of an argument meant to prove to me that the language is fine the way it is. Not a great feeling, and not something that fosters confidence in the language's future direction.
It's just a gentle ribbing, as evidenced by the :-) Please don't read more into it than that, as none is intended.
Jul 15 2016
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2016 1:58 PM, Shachar Shemesh wrote:
 Do enlighten me how to use intrusive reference counting in D. I am quite
 interested in the answer.
Andrei and I are working on it. As he's expressed elsewhere, the idea is to maintain the reference count in memory that is outside the type system. It's meant to work hand-in-glove with the storage allocator.
Jul 15 2016
parent reply Shachar Shemesh <shachar weka.io> writes:
On 16/07/16 02:04, Walter Bright wrote:
 On 7/15/2016 1:58 PM, Shachar Shemesh wrote:
 Do enlighten me how to use intrusive reference counting in D. I am quite
 interested in the answer.
Andrei and I are working on it. As he's expressed elsewhere, the idea is to maintain the reference count in memory that is outside the type system. It's meant to work hand-in-glove with the storage allocator.
First of all, it sounds like you envision that everyone will solely be using the D supplied allocators, and no one will be writing their own. That's not my vision of how a system programming language is used. In fact, I have seen very few large scale projects where a custom allocator was not used. Either way, prefixing data to a structure is not what "intrusive" means. But even if this turns out to be an adequate replacement for all the cases in which I'd want to use intrusive reference counting in C++ (unlikely), that only works for my first example, not my second one. Shachar
Jul 15 2016
parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2016 11:28 PM, Shachar Shemesh wrote:
 First of all, it sounds like you envision that everyone will solely be using
the
 D supplied allocators, and no one will be writing their own.
There won't be anything stopping anyone from writing their own allocators, just like there's nothing stopping one from writing their own sine and cosine functions. I'm well aware that systems programmers like to write their own allocators.
 But even if this turns out to be an adequate replacement for all the cases in
 which I'd want to use intrusive reference counting in C++ (unlikely), that only
 works for my first example, not my second one.
You mean move semantics? You can't move anything if there are existing pointers to it that can't be changed automatically.
Jul 15 2016
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 07/14/2016 12:17 PM, Jesse Phillips wrote:
 On Tuesday, 12 July 2016 at 05:15:09 UTC, Shachar Shemesh wrote:
 C++ fully defines when it is okay to cast away constness, gives you
 aids so that you know that that's what you are doing, and nothing
 else, and gives you a method by which you can do it without a cast if
 the circumstances support it.

 D says any such cast is UB.

 Shachar
Yeah, C++ defines how you can modify const data after saying you can never modify data from a const-qualified access path. §7.1.6.1/3 [1] I still haven't found someone who can explain how C++ can define the behavior of modifying a variable after casting away const. Sure, it says that if the original object was mutable (not stored in ROM) then you can modify it, but that is true of D as well; the language doesn't know the object is not stored in ROM, so it can't tell you what it will do when you try to modify it - only you can.

1. http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4296.pdf
Getting back to D, a more appropriate definition that gives us enough flexibility to implement allocators etc. has to take time into account. Something like the following: "During and after mutating a memory location typed as (unqualified) type T, no thread in the program (including the current thread) is allowed to effect a read of the same location typed as shared(T) or immutable(T)." This allows us to portably implement allocators that mutate formerly immutable data during deallocation. Andrei
Jul 15 2016
parent reply deadalnix <deadalnix gmail.com> writes:
On Friday, 15 July 2016 at 14:45:41 UTC, Andrei Alexandrescu 
wrote:
 On 07/14/2016 12:17 PM, Jesse Phillips wrote:
 On Tuesday, 12 July 2016 at 05:15:09 UTC, Shachar Shemesh 
 wrote:
 C++ fully defines when it is okay to cast away constness, 
 gives you
 aids so that you know that that's what you are doing, and 
 nothing
 else, and gives you a method by which you can do it without a 
 cast if
 the circumstances support it.

 D says any such cast is UB.

 Shachar
Yeah, C++ defines how you can modify const data after saying you can never modify data from a const-qualified access path. §7.1.6.1/3 [1] I still haven't found someone who can explain how C++ can define the behavior of modifying a variable after casting away const. Sure, it says that if the original object was mutable (not stored in ROM) then you can modify it, but that is true of D as well; the language doesn't know the object is not stored in ROM, so it can't tell you what it will do when you try to modify it - only you can.

1. http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4296.pdf
Getting back to D, a more appropriate definition that gives us enough flexibility to implement allocators etc. has to take time into account. Something like the following: "During and after mutating a memory location typed as (unqualified) type T, no thread in the program (including the current thread) is allowed to effect a read of the same location typed as shared(T) or immutable(T)." This allows us to portably implement allocators that mutate formerly immutable data during deallocation. Andrei
Read or write. For const(T) , same thing, but limited to write. Everything else is UB, as it is already UB at the hardware level.
Jul 15 2016
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 07/15/2016 01:27 PM, deadalnix wrote:
 On Friday, 15 July 2016 at 14:45:41 UTC, Andrei Alexandrescu wrote:
 On 07/14/2016 12:17 PM, Jesse Phillips wrote:
 On Tuesday, 12 July 2016 at 05:15:09 UTC, Shachar Shemesh wrote:
 C++ fully defines when it is okay to cast away constness, gives you
 aids so that you know that that's what you are doing, and nothing
 else, and gives you a method by which you can do it without a cast if
 the circumstances support it.

 D says any such cast is UB.

 Shachar
Yeah, C++ defines how you can modify const data after saying you can never modify data from a const-qualified access path. §7.1.6.1/3 [1] I still haven't found someone who can explain how C++ can define the behavior of modifying a variable after casting away const. Sure, it says that if the original object was mutable (not stored in ROM) then you can modify it, but that is true of D as well; the language doesn't know the object is not stored in ROM, so it can't tell you what it will do when you try to modify it - only you can.

1. http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4296.pdf
Getting back to D, a more appropriate definition that gives us enough flexibility to implement allocators etc. has to take time into account. Something like the following: "During and after mutating a memory location typed as (unqualified) type T, no thread in the program (including the current thread) is allowed to effect a read of the same location typed as shared(T) or immutable(T)." This allows us to portably implement allocators that mutate formerly immutable data during deallocation. Andrei
Read or write. For const(T) , same thing, but limited to write.
Thanks. Reworked: "During and after mutating a memory location typed as (unqualified) type T, no thread in the program (including the current thread) is allowed to (a) effect a read of the same location typed as const(T) or immutable(T), or (b) effect a read or write of the same location typed as shared(T)." Andrei
Jul 15 2016
parent deadalnix <deadalnix gmail.com> writes:
On Friday, 15 July 2016 at 18:01:43 UTC, Andrei Alexandrescu 
wrote:
 Read or write.

 For const(T) , same thing, but limited to write.
Thanks. Reworked: "During and after mutating a memory location typed as (unqualified) type T, no thread in the program (including the current thread) is allowed to (a) effect a read of the same location typed as const(T) or immutable(T), or (b) effect a read or write of the same location typed as shared(T)." Andrei
I think the idea is there, but there is still a problem: "During and after" does not have any meaning without an ordering constraint/memory barrier.
Jul 15 2016
prev sibling parent ag0aep6g <anonymous example.com> writes:
On 07/08/2016 08:42 PM, deadalnix wrote:
 It is meaningless because sometimes you have A and B that are both safe on their own, but doing both is unsafe. In which case A or B needs to be banned, but nothing tells you which one.
Would you mind giving an example? Purely to educate me.
 This isn't a bug, this is
 a failure to have a principled approach to safety.
The principled approach would have been to start with an empty set of features in @safe and then add stuff after verifying that it's safe on its own and in combination with what's already there. Right? If that's it, then that does seem better to me, yeah. But I'd say that the unprincipled approach has led to bugs (or maybe call them holes in the spec). And what I tried to say is that the dictatorship at least acknowledges those bugs/holes and agrees that they need fixing, whereas they're apparently just fine with the silly limitation of alias parameters that you pointed out.
 The position is inconsistent because the dictatorship refuses to compromise on mutually exclusive goals. For instance, @safe is defined as ensuring memory safety, but not against undefined behaviors (in fact Walter promotes the use of UB in various situations, for instance when it comes to shared). You CANNOT have undefined behaviors that are defined as being memory safe.
What you say makes sense to me. Seems silly when the compiler guards me against memory-safety errors but not against undefined behavior.
Jul 08 2016
prev sibling next sibling parent QAston <qaston gmail.com> writes:
On Friday, 8 July 2016 at 00:56:25 UTC, deadalnix wrote:
 While this is very true, it is clear that most of D's complexity doesn't come from there. D's complexity comes for the most part from things being completely unprincipled and from a lack of vision.
But that way is PRAGMATIC, don't you know about that? It's better to solve similar design issues in 5 places differently based on current need and then live with the problems caused forever after.
 Except that it is not the case. D fucks up orthogonality 
 everytime it has the opportunity.
Well, there are some things kept orthogonal. Runtime and compile-time parameters are not mushed together, for example. But the situation does not improve; relevant thread: http://forum.dlang.org/post/nkjjc0$1oht$1 digitalmars.com
 As a result, there is a ton of useless knowledge that has to be 
 accumulated.
Wanna use reflection? It's simple: just see the template type-destructuring syntax, the is expression, __traits, std.traits, the special built-in properties, and the function-like operators.
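To illustrate the overlap, a few trivial compile-time checks, each going through a different one of those mechanisms (nothing here is new API, just the standard facilities side by side):

import std.traits : isIntegral;

static assert(is(int : long));              // is-expression: implicit convertibility
static assert(__traits(isIntegral, int));   // __traits
static assert(isIntegral!int);              // std.traits
static assert(int.stringof == "int");       // special built-in property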
 For instance, you'd expect that

 template Foo(T...) {}

 would take a variadic number of type as arguments, while

 template Foo(alias T...) {}

 would take a variadic number of aliases. But no, 1 take a 
 variadic number of aliases and 2/ is invalid. So now we've 
 created a situation where it is now impossible to define 
 variadic, alias and parameter/argument as 3 simple, separate 
 things, but as a whole blob of arbitrary decisions.
Oh, but the current way saves 6 characters. And it was what was needed at the time of implementing it.
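For readers following along, the behavior described above is easy to check (Foo, Bar and the arguments are arbitrary): the T... form already binds to types, values, and symbols alike, and a separate alias T... form simply does not exist.

template Foo(T...) {}          // looks like a "type" sequence, yet accepts more than types

int x;
alias A = Foo!(int, 42, x);    // a type, a value, and a symbol (alias) all bind to T...

// template Bar(alias T...) {} // does not compile: there is no variadic alias syntax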
Jul 08 2016
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/7/2016 5:56 PM, deadalnix wrote:
 While this is very true, it is clear that most of D's complexity doesn't come from there. D's complexity comes for the most part from things being completely unprincipled and from a lack of vision.
All useful computer languages are unprincipled and complex due to a number of factors:

1. the underlying computer is unprincipled and complex (well known issues with integer and floating point arithmetic)
2. what programmers perceive as logical and intuitive is often neither logical nor intuitive to a computer (even Haskell has wackadoodle features to cater to illogical programmers)
3. what the language needs to do changes over time - the programming world is hardly static
4. new features tend to be added as adaptations of existing features (much like how evolution works)
5. new features have to be worked in without excessively breaking legacy compatibility
6. no language is conceived of as a whole and then implemented
7. the language designers are idiots and make mistakes

Of course, we try to minimize (7), but 1..6 are inevitable.
Jul 08 2016
next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Friday, 8 July 2016 at 21:24:04 UTC, Walter Bright wrote:
 On 7/7/2016 5:56 PM, deadalnix wrote:
 While this is very true, it is clear that most of D's complexity doesn't come from there. D's complexity comes for the most part from things being completely unprincipled and from a lack of vision.
All useful computer languages are unprincipled and complex due to a number of factors:
I think this is a very dangerous assumption. And also not true. What is true is that it is difficult to gain traction if a language does not look like a copy of a pre-existing and fairly popular language.
Jul 08 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/8/2016 2:58 PM, Ola Fosheim Grøstad wrote:
 On Friday, 8 July 2016 at 21:24:04 UTC, Walter Bright wrote:
 On 7/7/2016 5:56 PM, deadalnix wrote:
 While this is very true, it is clear that most of D's complexity doesn't come from there. D's complexity comes for the most part from things being completely unprincipled and from a lack of vision.
All useful computer languages are unprincipled and complex due to a number of factors:
I think this is a very dangerous assumption. And also not true.
Feel free to post a counterexample. All you need is one!
 What is true is that it is difficult to gain traction if a language does not
 look like a copy of a pre-existing and fairly popular language.
"what programmers perceive as logical and intuitive is often neither logical nor intuitive to a computer"
Jul 08 2016
next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Saturday, 9 July 2016 at 00:14:34 UTC, Walter Bright wrote:
 On 7/8/2016 2:58 PM, Ola Fosheim Grøstad wrote:
 On Friday, 8 July 2016 at 21:24:04 UTC, Walter Bright wrote:
 All useful computer languages are unprincipled and complex 
 due to a number of
 factors:
I think this is a very dangerous assumption. And also not true.
Feel free to post a counterexample. All you need is one!
Scheme.
 What is true is that it is difficult to gain traction if a 
 language does not
 look like a copy of a pre-existing and fairly popular language.
"what programmers perceive as logical and intuitive is often neither logical nor intuitive to a computer"
I don't understand what you mean by this. If they are programmers they should know the von Neumann architecture. I don't think that is the same as having a strong preference for what they already know...
Jul 09 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/9/2016 12:37 AM, Ola Fosheim Grøstad wrote:
 On Saturday, 9 July 2016 at 00:14:34 UTC, Walter Bright wrote:
 On 7/8/2016 2:58 PM, Ola Fosheim Grøstad wrote:
 On Friday, 8 July 2016 at 21:24:04 UTC, Walter Bright wrote:
 All useful computer languages are unprincipled and complex due to a number of
 factors:
I think this is a very dangerous assumption. And also not true.
Feel free to post a counterexample. All you need is one!
Scheme.
I know little about Scheme, so I googled it. https://en.wikipedia.org/wiki/Scheme_(programming_language) And the money shot: "The elegant, minimalist design has made Scheme a popular target for language designers, hobbyists, and educators, and because of its small size, that of a typical interpreter, it is also a popular choice for embedded systems and scripting. This has resulted in scores of implementations, most of which differ from each other so much that porting programs from one implementation to another is quite difficult, and the small size of the standard language means that writing a useful program of any great complexity in standard, portable Scheme is almost impossible." Seems that in order to make it useful, users had to extend it. This doesn't fit the criteria. Wirth's Pascal had the same problem. He invented an elegant, simple, consistent, and useless language. The usable Pascal systems all had a boatload of dirty, incompatible extensions.
 What is true is that it is difficult to gain traction if a language does not
 look like a copy of a pre-existing and fairly popular language.
"what programmers perceive as logical and intuitive is often neither logical nor intuitive to a computer"
I don't understand what you mean by this.
What programmers think of as "intuitive" is often a collection of special cases.
Jul 09 2016
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Saturday, 9 July 2016 at 08:39:10 UTC, Walter Bright wrote:
 Seems that in order to make it useful, users had to extend it. 
 This doesn't fit the criteria.
Scheme is a simple functional language which is easy to extend. Why would you conflate "useful" with "used for writing complex programs"? Anyway, there are many other examples, but less known.
 Wirth's Pascal had the same problem. He invented an elegant, 
 simple, consistent, and useless language. The usable Pascal 
 systems all had a boatload of dirty, incompatible extensions.
I am not sure if Pascal is elegant, but it most certainly is useful. So I don't think I agree with your definition of "useful".
 What programmers think of as "intuitive" is often a collection 
 of special cases.
I think I would need examples to understand what you mean here.
Jul 09 2016
next sibling parent reply Meta <jared771 gmail.com> writes:
On Sunday, 10 July 2016 at 02:44:14 UTC, Ola Fosheim Grøstad 
wrote:
 On Saturday, 9 July 2016 at 08:39:10 UTC, Walter Bright wrote:
 Seems that in order to make it useful, users had to extend it. 
 This doesn't fit the criteria.
Scheme is a simple functional language which is easy to extend. Why would you conflate "useful" with "used for writing complex programs"? Anyway, there are many other examples, but less known.
 Wirth's Pascal had the same problem. He invented an elegant, 
 simple, consistent, and useless language. The usable Pascal 
 systems all had a boatload of dirty, incompatible extensions.
I am not sure if Pascal is elegant, but it most certainly is useful. So I don't think I agree with your definition of "useful".
 What programmers think of as "intuitive" is often a collection 
 of special cases.
I think I would need examples to understand what you mean here.
I agree with Walter here. Scheme is not a language that you can generally do useful things in. If you want to do anything non-trivial, you switch to Racket (which is not as minimalistic and "pure" as Scheme).
Jul 09 2016
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Sunday, 10 July 2016 at 04:13:07 UTC, Meta wrote:
 I agree with Walter here. Scheme is not a language that you can 
 generally do useful things in. If you want to do anything 
 non-trivial, you switch to Racket (which is not as minimalistic 
 and "pure" as Scheme).
Define what you mean by a useful and clean language. I take "useful" to mean that it has been used for useful programming, and "clean" to mean that it follows a principled and coherent design. Both Scheme and Pascal are useful. Scheme is also elegant, for any reasonable definition of "useful" and "elegant". If you compare D to other languages you'll find a wide array of languages that are useful and far more principled than D. "Useful" is not a good excuse for not cleaning up a development environment.
Jul 10 2016
prev sibling next sibling parent reply Patrick Schluter <Patrick.Schluter bbox.fr> writes:
On Sunday, 10 July 2016 at 02:44:14 UTC, Ola Fosheim Grøstad 
wrote:
 On Saturday, 9 July 2016 at 08:39:10 UTC, Walter Bright wrote:
 Seems that in order to make it useful, users had to extend it. 
 This doesn't fit the criteria.
Scheme is a simple functional language which is easy to extend. Why would you conflate "useful" with "used for writing complex programs"? Anyway, there are many other examples, but less known.
 Wirth's Pascal had the same problem. He invented an elegant, 
 simple, consistent, and useless language. The usable Pascal 
 systems all had a boatload of dirty, incompatible extensions.
I am not sure if Pascal is elegant, but it most certainly is useful. So I don't think I agree with your definition of "useful".
Original Pascal was useless. You could not even have separate compilation, and there were no strings (only packed arrays of char of fixed size; two arrays of differing size were not compatible, so it was impossible to write a procedure or a function without defining it for every possible packed array size). It became useful thanks to the extensions added by UCSD (units) and by Turbo Pascal (strings).
Jul 10 2016
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Sunday, 10 July 2016 at 08:03:03 UTC, Patrick Schluter wrote:
 Original Pascal was useless. You could not even have separate compilation, and there were no strings (only packed arrays of char of fixed size; two arrays of differing size were not compatible, so it was impossible to write a procedure or a function without defining it for every possible packed array size). It became useful thanks to the extensions added by UCSD (units) and by Turbo Pascal (strings).
I've never used the original Pascal, but Pascal dialects were widely implemented and very useful for memory-constrained devices. Having a fixed memory layout was common and useful on such hardware. On such devices you use the diskette for storage and manually page disk sectors in and out, or simply live with fixed limits, which is typically better than frequent out-of-memory situations.
Jul 10 2016
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/9/2016 7:44 PM, Ola Fosheim Grøstad wrote:
 On Saturday, 9 July 2016 at 08:39:10 UTC, Walter Bright wrote:
 Seems that in order to make it useful, users had to extend it. This
 doesn't fit the criteria.
Scheme is a simple functional language which is easy to extend.
If they have to extend it, it isn't Scheme anymore. I bet you don't use Scheme, either.
 Wirth's Pascal had the same problem. He invented an elegant, simple,
 consistent, and useless language. The usable Pascal systems all had a
 boatload of dirty, incompatible extensions.
I am not sure if Pascal is elegant, but it most certainly is useful.
The original Pascal, which you said you'd never used. I have.
 So I don't think I agree with your definition of "useful".
Try and write a program in Wirth's Pascal that reads a character from the keyboard.
 What programmers think of as "intuitive" is often a collection of
 special cases.
I think I would need examples to understand what you mean here.
The dangling else is a classic. So are < > for template parameters in C++, and infix notation.
Jul 10 2016
next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Sunday, 10 July 2016 at 11:21:49 UTC, Walter Bright wrote:
 On 7/9/2016 7:44 PM, Ola Fosheim Grøstad wrote:
 On Saturday, 9 July 2016 at 08:39:10 UTC, Walter Bright wrote:
 Seems that in order to make it useful, users had to extend 
 it. This
 doesn't fit the criteria.
Scheme is a simple functional language which is easy to extend.
If they have to extend it, it isn't Scheme anymore.
Uh, well in that case there is no C++ at all. And we might as well say that gdc and ldc aren't D compilers either.
 The original Pascal, which you said you'd never used. I have.
I've used the subset, but not Wirth's original. Not that this is an argument for anything.
 So I don't think I agree with your definition of "useful".
Try and write a program in Wirth's Pascal that reads a character from the keyboard.
D has no _language_ support for I/O either; I'm not sure what the point is.
 What programmers think of as "intuitive" is often a 
 collection of
 special cases.
I think I would need examples to understand what you mean here.
Dangling else is a classic. < > for template parameters in C++. infix notation
Ok. Those are syntactic conventions. Does not affect the language design as such.
Jul 10 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/10/2016 7:54 AM, Ola Fosheim Grøstad wrote:
 Ok. Those are syntactic conventions.
You're changing the subject.
 Does not affect the language design
 as such.
And changing the subject again. Face it, your argument is destroyed :-)
Jul 10 2016
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Sunday, 10 July 2016 at 21:35:16 UTC, Walter Bright wrote:
 On 7/10/2016 7:54 AM, Ola Fosheim Grøstad wrote:
 Ok. Those are syntactic conventions.
You're changing the subject.
What? Nope, but let's stick to what most people evaluate: the core language. Syntax isn't really the big blocker. Yes, it may be enough to annoy some people, but it is when the core language is different that programmers run into serious problems. Some examples of somewhat elegant languages that are also useful:

Beta: everything is a pattern or an instance of a pattern.
Self: everything is an object.
Prolog: everything is a Horn clause.
Scheme: everything is a list.

All of these are useful languages, but programmers have trouble getting away from the semantic model they have of how programs should be structured. Btw, C++ is increasingly moving towards the Beta model: everything is a class object (including lambdas), but it is too late for C++ to get anywhere close to elegance.
 Face it, your argument is destroyed :-)
Of course not. Consistency and simplicity do not undermine usefulness. The core language should be simple. That has many advantages and is what most language designers strive for; unfortunately, the understanding of what the core language ought to be often comes too late.
Jul 10 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/10/2016 10:07 PM, Ola Fosheim Grøstad wrote:
 Face it, your argument is destroyed :-)
Of course not.
Trying to reparse and reframe your answers isn't going to help. I know all those rhetorical tricks <g>. I wrote:
 All useful computer languages are unprincipled and complex due to a 
number of factors: [...] to which you replied:
 not true
But there are no examples of such a language that doesn't fail at one or more of the factors, Scheme included. Not Prolog either, a singularly useless, obscure and failed language. You could come up with lists of ever more obscure languages, but that just adds to the destruction of your argument. The fact that other languages like C++ are adopting feature after feature from D proves that there's a lot of dazz in D!
Jul 11 2016
next sibling parent reply sarn <sarn theartofmachinery.com> writes:
On Monday, 11 July 2016 at 22:09:11 UTC, Walter Bright wrote:
 On 7/10/2016 10:07 PM, Ola Fosheim Grøstad wrote:
[Snip stuff about Scheme] Scheme is a really nice, elegant language that's fun to hack with, but at the end of the day, if people were writing Nginx, or the Windows kernel, or HFT systems in Scheme, you can bet programmers would be pushing pretty hard for special exceptions and hooks and stuff for better performance or lower-level access, and eventually you'd end up with another C. Walter said "all programming languages", but he's obviously referring to the programming market D is in.
Jul 11 2016
next sibling parent deadalnix <deadalnix gmail.com> writes:
On Tuesday, 12 July 2016 at 00:34:12 UTC, sarn wrote:
 On Monday, 11 July 2016 at 22:09:11 UTC, Walter Bright wrote:
 On 7/10/2016 10:07 PM, Ola Fosheim Grøstad wrote:
[Snip stuff about Scheme] Scheme is a really nice, elegant language that's fun to hack with, but at the end of the day, if people were writing Nginx, or the Windows kernel, or HFT systems in Scheme, you can bet programmers would be pushing pretty hard for special exceptions and hooks and stuff for better performance or lower-level access, and eventually you'd end up with another C. Walter said "all programming languages", but he's obviously referring to the programming market D is in.
This is a false dichotomy. Nobody says there should be no inconsistencies. Sometimes an inconsistency is just necessary because of other concerns. But it is a cost, and, like all costs, it must pay for itself. Most of these do not pay for themselves in D.
Jul 11 2016
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/11/2016 5:34 PM, sarn wrote:
 Walter said "all programming languages", but he's obviously referring to
 the programming market D is in.
I said "all USEFUL programming languages", thereby excluding toys, research projects, etc. Of course, "useful" is a slippery concept, but a good proxy is having widespread adoption. There are good reasons why Detroit's "concept cars" never survived intact into production. And why the windscreen on an airliner is flat instead of curved like the rest of the exterior.
Jul 11 2016
prev sibling parent reply Guillaume Piolat <first.last gmail.com> writes:
On Tuesday, 12 July 2016 at 00:34:12 UTC, sarn wrote:
 Scheme is a really nice, elegant language that's fun to hack 
 with, but at the end of the day, if people were writing Nginx, 
 or the Windows kernel, or HFT systems in Scheme, you can bet 
 programmers would be pushing pretty hard for special exceptions 
 and hooks and stuff for better performance or lower-level 
 access, and eventually you'd end up with another C.
Honestly after writing a toy Scheme interpreter, I've come to think much more highly of Javascript.
Jul 12 2016
parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/12/2016 3:50 AM, Guillaume Piolat wrote:
 Honestly after writing a toy Scheme interpreter, I've come to think much
 more highly of Javascript.
Javascript isn't quite as easy to implement as it looks :-)
Jul 12 2016
prev sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Monday, 11 July 2016 at 22:09:11 UTC, Walter Bright wrote:
 at one or more of the factors, Scheme included. Not Prolog 
 either, a singularly useless, obscure and failed language.
Err... Prolog is in use and has been far more influential on the state of the art than C++ or D ever will be. I think this discussion is dead...
Jul 11 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/11/2016 7:46 PM, Ola Fosheim Grøstad wrote:
 On Monday, 11 July 2016 at 22:09:11 UTC, Walter Bright wrote:
 at one or more of the factors, Scheme included. Not Prolog either, a
 singularly useless, obscure and failed language.
Err... Prolog is in use and has been far more influential on the state of art
"Prolog and other logic programming languages have not had a significant impact on the computer industry in general." https://en.wikipedia.org/wiki/Prolog#Limitations So, no.
 than C++ or D ever will.
I'm afraid that is seriously mistaken about C++'s influence on the state of the art, in particular compile time polymorphism and the work of Alexander Stepanov, and D's subsequent influence on C++. Also, although C++ did not invent OOP, OOP's late 1980s surge in use, popularity, and yes, *influence* was due entirely to C++. I was there, in the center of that storm. Other languages fell all over themselves to add OOP abilities due to C++'s influence. Even Fortran got on the bandwagon.
Jul 11 2016
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Tuesday, 12 July 2016 at 04:52:14 UTC, Walter Bright wrote:
 "Prolog and other logic programming languages have not had a 
 significant impact on the computer industry in general."

   https://en.wikipedia.org/wiki/Prolog#Limitations

 So, no.
That appears to be a 1995 reference from a logic programming languages conference. Of course logic programming has had a big impact on the state of the art.

Prolog -> Datalog
Datalog -> magic sets
magic sets -> inference engines
inference engines -> static analysis

And that is only a small part of it.
 I'm afraid that is seriously mistaken about C++'s influence on 
 the state of the art, in particular compile time polymorphism
Nah. You are confusing state-of-the-art with widespread system support.
 Also, although C++ did not invent OOP, OOP's late 1980s surge 
 in use, popularity, and yes, *influence* was due entirely to
In commercial application development sure. In terms of OOP principles and implementation, hell no.
Jul 11 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/11/2016 10:01 PM, Ola Fosheim Grøstad wrote:
 Of course logic programming has had a big impact on state of
 the art.

 Prolog -> Datalog
 Datalog -> magic sets
 magic sets -> inference engines
 inference engines  -> static analysis

 And that is only a small part of it.
Can you trace any Prolog innovations in
Jul 11 2016
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Tuesday, 12 July 2016 at 05:30:57 UTC, Walter Bright wrote:
 On 7/11/2016 10:01 PM, Ola Fosheim Grøstad wrote:
 Of course logic programming has had a big impact on state of
 the art.

 Prolog -> Datalog
 Datalog -> magic sets
 magic sets -> inference engines
 inference engines  -> static analysis

 And that is only a small part of it.
Can you trace any Prolog innovations in
I think you are taking the wrong view here. Logic programming is a generalized version of functional programming where you can have complex expressions on the left-hand side. It is basically unification:

https://en.wikipedia.org/wiki/Unification_(computer_science)#Application:_Unification_in_logic_programming

So yes, many languages are drawing on those principles in their type systems.
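To make that concrete, here is a minimal D sketch (not from the original post; KeyOf and ValueOf are made-up names, std.traits has equivalents): the "pattern" V[K] on the left-hand side is unified with the argument type, binding K and V much as Prolog unifies a goal with a clause head.

// The specialization V[K] is a pattern; instantiating the template
// unifies it with the argument type and binds K (key) and V (value).
alias KeyOf(V : V[K], K) = K;
alias ValueOf(V : V[K], K) = V;

static assert(is(KeyOf!(string[int]) == int));      // K bound to int
static assert(is(ValueOf!(string[int]) == string)); // V bound to string

The same deduction machinery drives C++ partial specialization, which is the usual example given of unification-style matching in a mainstream language.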
Jul 11 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/11/2016 10:42 PM, Ola Fosheim Grøstad wrote:
 So yes, many languages are drawing on those principles in their type
 systems.
D draws many features that come from functional programming. But I am not aware of any that come specifically from Prolog. And just to be clear, D aims to be a useful programming language. It is not intended as a vehicle for programming language research. It is also entirely possible for a research language to advance the state of the art (i.e. programming language theory), but not be useful. That is not what D is about, though. D's job is to get s*** done quickly and efficiently and make money for the businesses that use it.
Jul 11 2016
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Tuesday, 12 July 2016 at 06:29:36 UTC, Walter Bright wrote:
 And just to be clear, D aims to be a useful programming 
 language. It is not intended as a vehicle for programming 
 language research.
Neither was Prolog. It is used for useful programming as well as education (e.g. teaching unification to students). It isn't suited for programming-in-the-large, but that doesn't make it useless. And to be frank, D's symbol resolution isn't suitable for programming-in-the-large either. Of course, Prolog is old and there are also other alternatives for various types of problem solving, but the fact that almost every CS student has some understanding of Prolog unification makes it very influential.
Jul 12 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/12/2016 1:41 AM, Ola Fosheim Grøstad wrote:
 And to be frank D's symbol resolution isn't suitable for
programming-in-the-large
 either.
Explain.
 teaching
Frictionless masses are useful for teaching engineering, but are not useful in the real world, which tends to be complicated and dirty, just like useful programming languages.
 Of course, Prolog is old and there are also other alternatives for
 various types of problem solving, but the fact that almost every CS
 student have some understanding of Prolog unification makes it very
 influential.
I asked for one feature originating in Prolog that made its way into mainstream languages. You dismissed C++'s enormous influence in getting languages to adopt OOP, but defend Prolog influencing others with unification.
Jul 12 2016
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Tuesday, 12 July 2016 at 10:44:56 UTC, Walter Bright wrote:
 On 7/12/2016 1:41 AM, Ola Fosheim Grøstad wrote:
 And to be frank D's symbol resolution isn't suitable for 
 programming-in-the-large
 either.
Explain.
http://forum.dlang.org/thread/skqcudmkvqtejmofxoim forum.dlang.org
 Frictionless masses are useful for teaching engineering, but 
 are not useful in the real world, which tends to be complicated 
 and dirty, just like useful programming languages.
Languages sometimes get complicated and dirty when they are "patched up" with the requirement that they should not break existing code. C++ and Objective-C are such languages, and the source is both C itself and a lack of initial design consideration. However, your claim that Prolog has not been useful in the real world is silly. You are making some unstated assumptions about what «useful» means. There are plenty of expert systems based upon Prolog. There are plenty of problems that would be much easier to solve in Prolog than in D, and vice versa.
 I asked for one feature originating in Prolog that made its way 
 into mainstream languages.
No you didn't. Unification is Prolog's main feature. C++ template matching uses unification.
 You dismissed C++'s enormous influence in getting languages to 
 adopt OOP
Sure, _anyone_ with any kind of education in computing since the 80s would have learned what OO was way before C++ got mainstream around 1990. C++ got OO into mainstream application development, that's different. There were plenty of OO languages around before that event.
Jul 12 2016
prev sibling parent reply burjui <bytefu gmail.com> writes:
On Sunday, 10 July 2016 at 11:21:49 UTC, Walter Bright wrote:
 On 7/9/2016 7:44 PM, Ola Fosheim Grøstad wrote:
 Scheme is a simple functional language which is easy to extend.
If they have to extend it, it isn't Scheme anymore.
You misunderstand the meaning of "extend" with respect to Scheme due to your lack of experience with it. Macros are the way of extending Scheme; you don't need to hack the compiler for that. From Wikipedia:

---------------
Invocations of macros and procedures bear a close resemblance — both are s-expressions — but they are treated differently. When the compiler encounters an s-expression in the program, it first checks to see if the symbol is defined as a syntactic keyword within the current lexical scope. If so, it then attempts to expand the macro, treating the items in the tail of the s-expression as arguments without compiling code to evaluate them, and this process is repeated recursively until no macro invocations remain. If it is not a syntactic keyword, the compiler compiles code to evaluate the arguments in the tail of the s-expression and then to evaluate the variable represented by the symbol at the head of the s-expression and call it as a procedure with the evaluated tail expressions passed as actual arguments to it.
---------------

For example, an "if expression" is written as follows:

; Returns either settings or an error, depending on the condition
(if (authenticated)
    (load-settings)
    (error "cannot load settings, authentication required"))

Either branch is an expression, and the false-branch can be omitted (in which case Scheme's "null" equivalent will be returned instead). If you need a "block", a sequence of expressions, you could write this:

(if (authenticated)
    (begin
      (display "loading settings")
      (load-settings))
    (begin
      (display "something went wrong")
      (error "cannot load settings, authentication required")))

When you specify the true-branch only, it's tedious to wrap your sequence in a "begin" expression. But you can write a "when" macro, which takes a condition and a sequence of expressions and generates the code for you:

(define-syntax when
  (syntax-rules ()
    ((when pred exp exps ...)
     (if pred (begin exp exps ...)))))

Now you can use it just as an ordinary "if":

(when (authenticated)
  (save-settings)
  (display "Saved settings"))

What about the false-branch-only "if"?

(define-syntax unless
  (syntax-rules ()
    ((unless pred exp exps ...)
     (if (not pred) (begin exp exps ...)))))

(unless (dead)
  (display "walking")
  (walk))

The only syntax Scheme has is S-expressions, which are used to represent both data and code, so there's nothing to be extended in the language itself. You just write a macro that generates the code you want. Macros are effectively AST transformers; it just happens that in Scheme everything is represented as S-expressions, so the code you write is already the AST. So if you "extend" Scheme by writing a macro, it's still Scheme. You can think of macros as D string mixins, but without the ugly stringiness.
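For comparison, the same "unless" idea sketched with a D string mixin (not from the original post; unlessBlock is a made-up helper name):

import std.stdio;

// Assembles the source text; mixin() evaluates this at compile time
// and compiles the resulting string in place.
string unlessBlock(string cond, string body_)
{
    return "if (!(" ~ cond ~ ")) { " ~ body_ ~ " }";
}

void main()
{
    bool dead = false;
    mixin(unlessBlock("dead", q{ writeln("walking"); }));
}

The generated text is ordinary D, which is what the "ugly stringiness" remark is about: the macro body is assembled as a string rather than as structured syntax.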
Jul 10 2016
parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/10/2016 11:40 AM, burjui wrote:
 On Sunday, 10 July 2016 at 11:21:49 UTC, Walter Bright wrote:
 On 7/9/2016 7:44 PM, Ola Fosheim Grøstad wrote:
 Scheme is a simple functional language which is easy to extend.
If they have to extend it, it isn't Scheme anymore.
You misunderstand the meaning of "extend" in respect to Scheme due to your lack of experience with it. Macros are the way of extending Scheme, you don't need to hack the compiler for that.
I don't know Scheme, but macros are not really extending the language. The Wikipedia article suggested much more, as in non-portable extensions and multiple dialects, not just macros.
Jul 10 2016
prev sibling next sibling parent reply deadalnix <deadalnix gmail.com> writes:
On Saturday, 9 July 2016 at 00:14:34 UTC, Walter Bright wrote:
 On 7/8/2016 2:58 PM, Ola Fosheim Grøstad wrote:
 On Friday, 8 July 2016 at 21:24:04 UTC, Walter Bright wrote:
 On 7/7/2016 5:56 PM, deadalnix wrote:
 While this very true, it is clear that most D's complexity 
 doesn't come from
 there. D's complexity come for the most part from things 
 being completely
 unprincipled and lack of vision.
All useful computer languages are unprincipled and complex due to a number of factors:
I think this is a very dangerous assumption. And also not true.
Feel free to post a counterexample. All you need is one!
Lisp.
 What is true is that it is difficult to gain traction if a 
 language does not
 look like a copy of a pre-existing and fairly popular language.
"what programmers perceive as logical and intuitive is often neither logical nor intuitive to a computer"
That's why we have compiler writers and language designers.
Jul 11 2016
parent Jack Stouffer <jack jackstouffer.com> writes:
On Monday, 11 July 2016 at 18:18:22 UTC, deadalnix wrote:
 Lisp.
Which one?
Jul 11 2016
prev sibling parent reply Nobody <nobody gmail.com> writes:
On Saturday, 9 July 2016 at 00:14:34 UTC, Walter Bright wrote:
 On 7/8/2016 2:58 PM, Ola Fosheim Grøstad wrote:
 On Friday, 8 July 2016 at 21:24:04 UTC, Walter Bright wrote:
 On 7/7/2016 5:56 PM, deadalnix wrote:
 While this very true, it is clear that most D's complexity 
 doesn't come from
 there. D's complexity come for the most part from things 
 being completely
 unprincipled and lack of vision.
All useful computer languages are unprincipled and complex due to a number of factors:
I think this is a very dangerous assumption. And also not true.
Feel free to post a counterexample. All you need is one!
Perl 6.
Jul 16 2016
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Sun, Jul 17, 2016 at 02:59:42AM +0000, Nobody via Digitalmars-d wrote:
 On Saturday, 9 July 2016 at 00:14:34 UTC, Walter Bright wrote:
 On 7/8/2016 2:58 PM, Ola Fosheim Grøstad wrote:
 On Friday, 8 July 2016 at 21:24:04 UTC, Walter Bright wrote:
 On 7/7/2016 5:56 PM, deadalnix wrote:
 While this very true, it is clear that most D's complexity
 doesn't come from there. D's complexity come for the most part
 from things being completely unprincipled and lack of vision.
All useful computer languages are unprincipled and complex due to a number of factors:
I think this is a very dangerous assumption. And also not true.
Feel free to post a counterexample. All you need is one!
Perl 6.
Are you serious? Perl is the *prime* example of "unprincipled and complex". Larry Wall himself said (in print, no less):

	English is useful because it is a mess. Since English is a mess, it maps well onto the problem space, which is also a mess, which we call reality. Similarly, Perl was designed to be a mess, though in the nicest of all possible ways.
	-- Larry Wall

T

-- 
Being able to learn is a great learning; being able to unlearn is a greater learning.
Jul 16 2016
parent reply Bill Hicks <billhicks gmail.com> writes:
On Sunday, 17 July 2016 at 05:50:31 UTC, H. S. Teoh wrote:
 On Sun, Jul 17, 2016 at 02:59:42AM +0000, Nobody via 
 Digitalmars-d wrote:
 
 Perl 6.
Are you serious? Perl is the *prime* example of "unprincipled and complex". Larry Wall himself said (in print, no less): English is useful because it is a mess. Since English is a mess, it maps well onto the problem space, which is also a mess, which we call reality. Similarly, Perl was designed to be a mess, though in the nicest of all possible ways. -- Larry Wall T
1. Perl 6 is not Perl.
2. Perl 6 is a better designed language than D will ever be.
3. Perl 6 is complex, but not complicated. I think people sometimes confuse the two.
4. D is a failed language, regardless of how people choose to categorize its attributes.
Jul 18 2016
parent Chris <wendlec tcd.ie> writes:
On Monday, 18 July 2016 at 11:05:34 UTC, Bill Hicks wrote:
 On Sunday, 17 July 2016 at 05:50:31 UTC, H. S. Teoh wrote:
 On Sun, Jul 17, 2016 at 02:59:42AM +0000, Nobody via 
 Digitalmars-d wrote:
 
 Perl 6.
Are you serious? Perl is the *prime* example of "unprincipled and complex". Larry Wall himself said (in print, no less): English is useful because it is a mess. Since English is a mess, it maps well onto the problem space, which is also a mess, which we call reality. Similarly, Perl was designed to be a mess, though in the nicest of all possible ways. -- Larry Wall T
1. Perl 6 is not Perl. 2. Perl 6 is better designed language than D will ever be. 3. Perl 6 is complex, but not complicated. I think people sometimes confuse the two. 4. D is a failed language, regardless of how people choose to categorize its attributes.
There are some interesting discussions about Perl 6 [1][2]. They remind me of the discussions about D. Apart from some irrational points (the logo!), the fact that it took 15 years figures prominently - and people complain about its features that were so carefully designed. I don't know Perl 6 and cannot comment on the validity of that criticism.

[1] http://blogs.perl.org/users/zoffix_znet/2016/01/why-in-the-world-would-anyone-use-perl-6.html
[2] https://www.quora.com/Why-is-Perl-6-considered-to-be-a-disaster
Jul 18 2016
prev sibling parent Kagamin <spam here.lot> writes:
On Sunday, 17 July 2016 at 02:59:42 UTC, Nobody wrote:
 Perl 6.
Inequality operator :)
Jul 18 2016
prev sibling parent QAston <qaston gmail.com> writes:
On Friday, 8 July 2016 at 21:24:04 UTC, Walter Bright wrote:
 All useful computer languages are unprincipled and complex due 
 to a number of factors:

 1. the underlying computer is unprincipled and complex (well 
 known issues with integer and floating point arithmetic)

 2. what programmers perceive as logical and intuitive is often 
 neither logical nor intuitive to a computer (even Haskell has 
 wackadoodle features to cater to illogical programmers)

 3. what the language needs to do changes over time - the 
 programming world is hardly static

 4. new features tend to be added as adaptations of existing 
 features (much like how evolution works)

 5. new features have to be worked in without excessively 
 breaking legacy compatibility

 6. no language is conceived of as a whole and then implemented

 7. the language designers are idiots and make mistakes


 Of course, we try to minimize (7), but 1..6 are inevitable.
Letting the consequences of those just happen is like being a sailor and happily going the wrong way when the wind is not perfect. There are techniques, both from the project management point of view and from the language design point of view, that can be applied to minimize and prevent those effects but aren't applied. So, some examples:

About 6, 3 and 4. There's a thing called the MVP (minimal viable product) approach. You implement the minimally useful bare bones and leave as much wiggle room as possible for future changes. This doesn't need to be applied to the whole language, it can be applied per feature as well. Yes, you can't predict the future; that's not an excuse for not preparing for it and for possible change. When you're happy with the MVP you can accept it and do more experimentation with the knowledge you got from doing the MVP. See also: agile methodologies.

About 2 and 7. Some positive changes happen here, but listing possible solutions won't hurt. Have more peer review. Release more often, so people won't be bothered if their feature doesn't make it (it's weird that Andrei worked at Facebook, yet he doesn't push for faster, less feature-heavy releases). Release new features under feature gates (like -dip25 and std.experimental), and don't stabilize until people provide feedback and are happy with the result (yes, there's an issue with people not testing the feature because of small D adoption - don't include the feature for its own sake if people can't be bothered to test it). We're all idiots - so we need to experiment before we commit to something.

About 4 and 5. Those are partially combated by the MVP approach mentioned earlier - leaving as much flexibility as possible for as long as possible, so you can make decisions when you have the most information (i.e. as late as possible). Another way to combat this is to build the language from independent parts that can be composed together. As an example of this done right: concrete code, templating mechanisms and conditional compilation are all independent parts of the language that can be put together, so the effective number of features is a cross product of those 3. Deadalnix gave some examples of that done wrong in this thread - as he's implementing a D compiler from scratch, he can see the non-orthogonal parts easily. SDC is probably worth looking into. With independent features it's much easier to keep the feature set adaptable.

Another language design trick is to look at the whole language and try to hit as many birds as possible with as few (but not too few) stones as possible. There are numerous cases of problems being solved using different means each time a problem comes up. Look for a way to merge these categories of solutions into something coherent and deprecate the old mess. You don't have to drop the old ways, but new users will be glad to have a simpler model to learn, as old "special" solutions can be defined in terms of the new one. Do you really need pragmas, special variables and special constants when you could have nicely namespaced intrinsics modules defined with existing and understood language rules? Do you need alias this when you could have opImplicitCast returning ref? I didn't just make those up; there are languages which do all of those and more, and they work within the same domain as D.
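As a point of reference, a minimal sketch of the alias this feature mentioned above (opImplicitCast does not exist in D; it is the hypothetical alternative the post refers to):

import std.stdio;

struct Meters
{
    double value;
    alias value this;   // Meters now implicitly converts to double via 'value'
}

void takesDouble(double d) { writeln(d); }

void main()
{
    auto m = Meters(3.0);
    takesDouble(m);      // ok: m.value is supplied where a double is expected
    double d = m + 1.5;  // arithmetic also goes through the aliased member
    writeln(d);          // 4.5
}

The hypothetical opImplicitCast-returning-ref design would express the same implicit subtyping through an ordinary, overloadable member function rather than a dedicated declaration.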
Jul 09 2016