
digitalmars.D - Clay language

reply bearophile <bearophileHUGS lycos.com> writes:
Through Reddit I have found a link to some information about the Clay language.
It aims to be (or will be) a C++-class language, but it's not tied to C syntax.
It shares several semantic similarities with D too. It looks like a cute
language:
https://github.com/jckarter/clay/wiki/

Some small parts from the docs:

--------------------------

In Clay this:
https://github.com/jckarter/clay/wiki/Syntax-desugaring

static for (a in ...b)
    c;

is equivalent to:

{
    ref a = <first element of b>;
    c;
}
{
    ref a = <second element of b>;
    c;
}
/* ... */

I have an enhancement request about this for D:
http://d.puremagic.com/issues/show_bug.cgi?id=4085

--------------------

The part about the safer pointer type system is very similar to what I have
asked for in D, and it looks similar to what the Ada language does (for Clay
this is just a proposal, not implemented yet, but Ada is a rock-solid language):
https://github.com/jckarter/clay/wiki/Safer-pointer-type-system

--------------------

This is something that I want for D too, it's important:

Jonathan Shapiro (of BitC) makes an excellent argument that, in a systems
language, it is often undesirable to depend on the whims of an ill-specified
optimizer to convert abstract code into efficient machine code. The BitC
specification thus includes the idea of guaranteed optimizations, to allow code
to be written in a high-level style with predictably low or nonexistent runtime
cost (link). [...] Because Clay seeks to support systems programming with
high-level abstraction, certain patterns should be guaranteed to be optimized
in a certain way, instead of being left to the whims of LLVM or a C compiler.
Additional optimizations should not be prevented, however. [...] It should be
possible to specify that one or more of these optimizations is required, and
have the compiler raise an error when they cannot be applied for some reason.<
https://github.com/jckarter/clay/wiki/Guaranteed-optimizations

Bye,
bearophile
Dec 27 2010
next sibling parent reply Guilherme Vieira <n2.nitrogen gmail.com> writes:
On Mon, Dec 27, 2010 at 4:35 PM, bearophile <bearophileHUGS lycos.com>wrote:

 Through Reddit I have found a link to some information about the Clay
 language, it wants to be (or it will be) a C++-class language, but it's not
 tied to C syntax. It shares several semantic similarities with D too. It
 looks like a cute language:
 https://github.com/jckarter/clay/wiki/

 Some small parts from the docs:

 --------------------------

 In Clay this:
 https://github.com/jckarter/clay/wiki/Syntax-desugaring

 static for (a in ...b)
    c;

 is equivalent to:

 {
    ref a = <first element of b>;
    c;
 }
 {
    ref a = <second element of b>;
    c;
 }
 /* ... */

 I have an enhancement request about this for D:
 http://d.puremagic.com/issues/show_bug.cgi?id=4085

 --------------------

 The part about Safer pointer type system is very similar to what I did ask
 for D, and it looks similar to what Ada language does (for Clay this is just
 a proposal, not implemented yet, but Ada is a rock-solid language):
 https://github.com/jckarter/clay/wiki/Safer-pointer-type-system

 --------------------

 This is something that I want for D too, it's important:

Jonathan Shapiro (of BitC) makes an excellent argument that, in a systems
language, it is often undesirable to depend on the whims of an ill-specified
optimizer to convert abstract code into efficient machine code. The BitC
specification thus includes the idea of guaranteed optimizations, to allow code
to be written in a high-level style with predictably low or nonexistent runtime
cost (link). [...] Because Clay seeks to support systems programming with
high-level abstraction, certain patterns should be guaranteed to be optimized
in a certain way, instead of being left to the whims of LLVM or a C compiler.
Additional optimizations should not be prevented, however. [...] It should be
possible to specify that one or more of these optimizations is required, and
have the compiler raise an error when they cannot be applied for some reason.<
https://github.com/jckarter/clay/wiki/Guaranteed-optimizations

Bye,
bearophile
+1 for static for and guaranteed optimizations. Can we put it in the wishlist?

--
Atenciosamente / Sincerely,
Guilherme ("n2liquid") Vieira
Dec 27 2010
parent "Robert Jacques" <sandford jhu.edu> writes:
On Mon, 27 Dec 2010 13:42:50 -0700, Guilherme Vieira  
<n2.nitrogen gmail.com> wrote:

 On Mon, Dec 27, 2010 at 4:35 PM, bearophile  
 <bearophileHUGS lycos.com>wrote:

 Through Reddit I have found a link to some information about the Clay
 language, it wants to be (or it will be) a C++-class language, but it's  
 not
 tied to C syntax. It shares several semantic similarities with D too. It
 looks like a cute language:
 https://github.com/jckarter/clay/wiki/

 Some small parts from the docs:

 --------------------------

 In Clay this:
 https://github.com/jckarter/clay/wiki/Syntax-desugaring

 static for (a in ...b)
    c;

 is equivalent to:

 {
    ref a = <first element of b>;
    c;
 }
 {
    ref a = <second element of b>;
    c;
 }
 /* ... */

 I have an enhancement request about this for D:
 http://d.puremagic.com/issues/show_bug.cgi?id=4085

 --------------------

 The part about Safer pointer type system is very similar to what I did  
 ask
 for D, and it looks similar to what Ada language does (for Clay this is  
 just
 a proposal, not implemented yet, but Ada is a rock-solid language):
 https://github.com/jckarter/clay/wiki/Safer-pointer-type-system

 --------------------

 This is something that I want for D too, it's important:

Jonathan Shapiro (of BitC) makes an excellent argument that, in a systems
language, it is often undesirable to depend on the whims of an ill-specified
optimizer to convert abstract code into efficient machine code. The BitC
specification thus includes the idea of guaranteed optimizations, to allow code
to be written in a high-level style with predictably low or nonexistent runtime
cost (link). [...] Because Clay seeks to support systems programming with
high-level abstraction, certain patterns should be guaranteed to be optimized
in a certain way, instead of being left to the whims of LLVM or a C compiler.
Additional optimizations should not be prevented, however. [...] It should be
possible to specify that one or more of these optimizations is required, and
have the compiler raise an error when they cannot be applied for some reason.<
https://github.com/jckarter/clay/wiki/Guaranteed-optimizations

Bye,
bearophile
+1 for static for and guaranteed optimizations. Can we put it in the wishlist?
If you followed the bug report, you'd find D already has a way of doing static foreach.
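For reference, the way D does it today is a foreach over a compile-time tuple,
which the compiler unrolls much like Clay's static for (a sketch; std.typetuple
is the 2010-era module, in current D the same thing is std.meta.AliasSeq):

```d
import std.typetuple : TypeTuple; // in current D: std.meta.AliasSeq
import std.stdio : writeln;

void main()
{
    // The loop body is instantiated once per element at compile time,
    // just like Clay's "static for (a in ...b) c;" desugaring.
    foreach (a; TypeTuple!(1, 2.5, "three"))
        writeln(a);
}
```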
Dec 27 2010
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 This is something that I want for D too, it's important:
 
 Jonathan Shapiro (of BitC) makes an excellent argument that, in a systems
 language, it is often undesirable to depend on the whims of an
 ill-specified optimizer to convert abstract code into efficient machine
 code. The BitC specification thus includes the idea of guaranteed
 optimizations, to allow code to be written in a high-level style with
 predictably low or nonexistent runtime cost (link). [...] Because Clay
 seeks to support systems programming with high-level abstraction, certain
 patterns should be guaranteed to be optimized in a certain way, instead of
 being left to the whims of LLVM or a C compiler. 
 [...]
 It should be possible to specify
 that one or more of these optimizations is required, and have the compiler
 raise an error when they cannot be applied for some reason.<
Frankly, this is meaningless. For example, while C does not specify what
optimizations will be done, in practice you'd be very hard pressed to find one
that didn't do:

1. replacing a*4 with a<<2
2. doing decent register allocation, obviating the role of the "register" keyword
3. dead assignment elimination
4. named return value optimization
5. etc.
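(Point 1 is easy to check: the two forms agree for any int, which is why every
optimizer performs this strength reduction silently. A small D sketch:)

```d
void main()
{
    // a*4 and a<<2 produce the same bits for the whole signed range,
    // so compilers rewrite the multiply as a shift without being asked.
    foreach (int a; [-5, -1, 0, 1, 123456])
        assert(a * 4 == a << 2);
}
```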
 Additional optimizations should not be prevented, however.
I.e. the same situation as today.
Dec 27 2010
parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter:

 Additional optimizations should not be prevented, however.
 I.e. the same situation as today.
It's a different situation. See this part:
 It should be possible to specify
 that one or more of these optimizations is required, and have the compiler
 raise an error when they cannot be applied for some reason.<
The idea is to allow the code (I presume with the help of some annotations) to
require certain optimizations (some are listed here, and they are more complex
than replacing a*4 with a<<2:
https://github.com/jckarter/clay/wiki/Guaranteed-optimizations ). If the
compiler doesn't perform a specific optimization that an annotation asks for, a
compilation error is raised, and the programmer is forced to change the code,
removing the annotation or doing something else. This also means that a
conforming Clay compiler must perform a certain number of non-trivial
optimizations (further optimizations are accepted, so the point is for a
specific piece of code to have a common baseline of optimization).

I don't know if all this is possible (this meta-feature is in the wish list of
Clay), but if it's doable then it sounds like a very good thing to add to D3.

Bye,
bearophile
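(To make the proposal concrete, a purely hypothetical annotation syntax;
nothing like @guaranteed exists in D or Clay today, this only illustrates the
idea:)

```d
// Hypothetical: compilation would fail unless the compiler can prove
// it turned this recursion into a loop (tail-call elimination).
// The @guaranteed attribute is invented for illustration only.
@guaranteed(tailCall)
int sum(int[] a, int acc = 0)
{
    return a.length ? sum(a[1 .. $], acc + a[0]) : acc;
}
```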
Dec 27 2010
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 The idea is to allow the code (I presume with the help of some annotations)
 to require certain optimizations (some are listed here, and they are more
 complex than replacing a*4 with
 a<<2:https://github.com/jckarter/clay/wiki/Guaranteed-optimizations ), if the
 compiler doesn't perform a specific optimization (that I presume an
 annotation asks for), a compilation error is raised, and the programmer is
 forced to change the code, removing the annotation or doing something else.
 This also means that a conforming Clay compiler must perform a certain number
 of not simple optimizations (further optimizations are accepted. So the point
 is for a specific piece of code to have a common level of optimization).
Like I said, this is meaningless wankery. There's a minimum bar of
acceptability of the generated code for every widely used compiler, specified
or not.

Do you really want to annotate code with which optimizations should be
performed? Frankly, you and everyone else will get it wrong. Look at the
register keyword in C - every non-trivial piece of code I've seen that tried to
carefully annotate declarations with "register" got it wrong, which is why
every C/C++ compiler today ignores the "register" keyword, and now the "inline"
keyword is ignored as well.

Let the compiler implementors do their job. If they don't, file bug reports,
use another compiler, etc. You don't specify how UPS gets a package to your
door; if you tried you'd get it wrong. You simply expect them to do it.
Dec 27 2010
prev sibling parent reply Stanislav Blinov <stanislav.blinov gmail.com> writes:
On 12/28/2010 12:48 AM, bearophile wrote:
 Walter:

 Additional optimizations should not be prevented, however.
 I.e. the same situation as today.
It's a different situation. See this part:
 It should be possible to specify
 that one or more of these optimizations is required, and have the compiler
 raise an error when they cannot be applied for some reason.<
The idea is to allow the code (I presume with the help of some annotations) to require certain optimizations (some are listed here, and they are more complex than replacing a*4 with a<<2:https://github.com/jckarter/clay/wiki/Guaranteed-optimizations ), if the compiler doesn't perform a specific optimization (that I presume an annotation asks for), a compilation error is raised, and the programmer is forced to change the code, removing the annotation or doing something else. This also means that a conforming Clay compiler must perform a certain number of not simple optimizations (further optimizations are accepted. So the point is for a specific piece of code to have a common level of optimization). I don't know if all this is possible (this meta-feature is in the wish list of Clay), but if it's doable then it sounds like a very good thing to add to D3.
Frankly, I'd warmly back Walter on this one. "Requiring" optimizations is
similar to asking a compiler to do something it's not supposed to. In the end,
optimization is a means the compiler uses to make generated code more
"efficient", which is a subjective term and can only be approximated by e.g.
instruction count per operation, memory request rate, overall code size, etc.
But it's not a "conquer-the-world" means. On one hand, demanding a certain
optimization means that the programmer knows it's possible (e.g., the
programmer can do it herself with inline asm), but on the other hand it must
mean that it's applicable to the language constructs used (though the language
or compiler may not be smart enough to figure out the "optimizable" spot).

I can't think of a way to blame the compiler if it can't optimize code the way
you want (and blaming in this context means forcing an error message instead of
using another code generation path) - this, in effect, is like asking the
compiler to reject perfectly valid code. It also narrows the selection of
compilers for your code (because an optimization is, at the very least, a thing
a compiler may or may not have).

I think that what *would* be useful is *disallowing* optimizations for certain
parts of code. I mean (if we talk about dmd or many C/C++ compilers), with the
-O flag the compiler is given carte blanche on code generation, which may or
may not lead to certain "unexpected" differences between unoptimized and
optimized code. For example, several times I ran across nasty spots with
optimized-out temporaries in C++ (think of relying on constructor calls). I
can't remember the exact cases (shame on me, I hadn't thought about keeping a
test-case database at that point), but I do remember well the frustration.

This disallowing is not so easy as it seems (not to talk about requiring),
because an optimizing compiler can employ certain assumptions on overall code
generation strategy, which may not fit well with this "external control".
But overall, asking *not* to do certain things is a lot more liberal than asking to *do the things in the exact way*.
Dec 27 2010
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Stanislav Blinov wrote:
 Frankly, I'd warmly back Walter on this one. "Requiring" optimizations 
 is similar to asking a compiler to do something it's not supposed to. In 
 the end, optimization is a means compiler uses to make generated code 
 more "efficient", which is a subjective term and can only be 
 approximated by e.g. instruction count per operation, memory request 
 rate, overall code size, etc. But it's not a "conquer-the-world" means. 
 On one hand, demanding a certain optimization means that programmer 
 knows it's possible (e.g, programmer can do it herself with, e.g., 
 inline asm), but on the other hand it must mean that it's applicable to 
 used language instructions (but language or compiler may not be smart 
 enough to figure out "optimizable" spot).
Another problem with specified optimizations is that computer architectures change, and the right optimizations to use can change dramatically (and has).
 I think that what *would* be useful is *disallowing* optimizations for 
 certain parts of code. I mean, (if we talk about dmd or many C/C++ 
 compilers), with -O flag, the compiler is given carte blanche on code 
 generation, which may or may not lead to certain "unexpected" 
 differences between unoptimized and optimized code. For example, several 
 times I ran across nasty spots with optimized-out temporaries in C++ 
 (think of relying on constructor calls). I can't remember the exact 
 cases (shame on me, I haven't thought about storing a test-case database 
 at that point), but I do remember well the frustration.
 This disallowing is not so easy as it seems (not to talk about 
 requiring), because optimizing compiler can employ certain assumptions 
 on overall code generation strategy, which may not fit well this 
 "external control". But overall, asking *not* to do certain things is a 
 lot more liberal than asking to *do the things in the exact way*.
This is the issue with "implementation defined" behavior. D tries to minimize that.
Dec 27 2010
parent Stanislav Blinov <stanislav.blinov gmail.com> writes:
On 12/28/2010 03:49 AM, Walter Bright wrote:
 Stanislav Blinov wrote:
 Frankly, I'd warmly back Walter on this one. "Requiring" optimizations
 is similar to asking a compiler to do something it's not supposed to...
Another problem with specified optimizations is that computer architectures change, and the right optimizations to use can change dramatically (and has).
 I think that what *would* be useful is *disallowing* optimizations for
 certain parts of code...
This is the issue with "implementation defined" behavior. D tries to minimize that.
Damn. What defines a distance between knowledge and wisdom is often the ability to say everything with one word (well, at least one sentence) :)
Dec 27 2010
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/27/10 12:35 PM, bearophile wrote:
 Through Reddit I have found a link to some information about the Clay
language, it wants to be (or it will be) a C++-class language, but it's not
tied to C syntax. It shares several semantic similarities with D too. It looks
like a cute language:
 https://github.com/jckarter/clay/wiki/
[snip]

FWIW I just posted a response to a question asking for a comparison between
Clay and D2.

http://www.reddit.com/r/programming/comments/es2jx/clay_programming_language_wiki/

Andrei
Dec 27 2010
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Andrei:

 FWIW I just posted a response to a question asking for a comparison 
 between Clay and D2.
 
 http://www.reddit.com/r/programming/comments/es2jx/clay_programming_language_wiki/
Just a few comments:
 The docs offer very little on Clay's module system (which is rock solid in D2).
D2's module system may be fixed, but currently it's not even bread-solid.

The Clay syntax for imports is more similar to what I have desired for D (but
that () syntax is not so good):

import foo.bar;
    // Imports module foo.bar as a qualified path
    // use "foo.bar.bas" to access foo.bar member bas
import foo.bar as bar;
    // Imports module foo.bar with alias bar
    // use "bar.bas" to access foo.bar member bas
import foo.bar.(bas);
    // Imports member bas from module foo.bar
    // use "bas" to access foo.bar member bas
import foo.bar.(bas as fooBarBas)
    // Imports member bas with alias fooBarBas
import foo.bar.*;
    // Imports all members from module foo.bar

I don't know about the Modula-3 module system, I will search for info about it.
Clay mentions multiple dispatch as a major feature. Based on extensive
experience in the topic I believe that that's a waste of time. Modern C++
Design has an extensive chapter on multiple dispatch, and I can vouch next to
nobody uses it in the real world. Sure, it's nice to have, but its actual
applicability is limited to shape collision testing and a few toy examples.<
I think double dispatch is enough; it covers most cases and keeps compiler
complexity low enough. If you put double dispatch with a nice syntax in D then
maybe people will use it. There are many things that people in other languages
use that C++ programmers don't use, because using them in C++ is ugly, a pain,
unsafe, etc. The visitor pattern is used enough in Java (Scala too was designed
to solve this problem).

Bye,
bearophile
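(For reference, classic double dispatch via the visitor pattern already works
in today's D; a minimal sketch with invented class names:)

```d
import std.stdio : writeln;

interface Visitor { void visit(Circle c); void visit(Square s); }

abstract class Shape { abstract void accept(Visitor v); }

class Circle : Shape { override void accept(Visitor v) { v.visit(this); } }
class Square : Shape { override void accept(Visitor v) { v.visit(this); } }

// One concrete visitor; adding a new operation means adding a new
// visitor class, without touching the Shape hierarchy.
class Printer : Visitor
{
    void visit(Circle c) { writeln("circle"); }
    void visit(Square s) { writeln("square"); }
}

void main()
{
    Shape[] shapes;
    shapes ~= new Circle;
    shapes ~= new Square;
    auto p = new Printer;
    foreach (s; shapes)
        s.accept(p); // dispatches on the shape *and* on the visitor
}
```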
Dec 27 2010
next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
bearophile wrote:

 The Clay syntax for imports is more similar to what I have desired for D:
That looks virtually identical to D, with the sole exception that "import
module" there means "static import module" in D, and "import module.*" is
"import module". But the functionality is identical.

I've seen you bring up this import thing over and over again, and every time
someone points out D already does what you ask for and has since forever ago,
but you always bring it up again. I don't understand what your problem is with
this. Is your only complaint that import all is the default? If yes, why did
you show 3 irrelevant things there instead of just the two that directly
reflect what you are trying to say?

By the way, I find D's system to be perfect. It Just Works in most situations,
and fails to compile meaningfully in the rest, trivially fixed by putting the
absolute module path where it is needed and nowhere else. This is rock solid
today. (I assume when you said bread-solid you were referring to package
protection and stuff like that. But that's separate from the import statement,
so again, why not say it directly instead of mixing it with so much irrelevant
stuff?)

That's the ideal situation. Writing out qualified paths by default is just
awful, and I can't understand why you keep asking for it.
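(Side by side, the D spellings of those Clay forms, using std.math merely as a
stand-in module:)

```d
static import std.math;              // qualified only: std.math.sqrt
import math = std.math;              // aliased:        math.sqrt
import std.math : sqrt;              // selective:      sqrt
import std.math : squareRoot = sqrt; // renamed member: squareRoot
// plain "import std.math;" would make every member visible (D's default)

void main()
{
    assert(std.math.sqrt(4.0) == 2.0);
    assert(math.sqrt(9.0) == 3.0);
    assert(sqrt(16.0) == 4.0);
    assert(squareRoot(25.0) == 5.0);
}
```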
Dec 27 2010
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Adam D. Ruppe:

 That's the ideal situation. Writing out qualified paths by default
 is just awful, and I can't understand why you keep asking for it.
Importing all names from a module by default is against modularity. While you
read a module's code you don't know where the imported names it uses come from.
Take also a look at how Ada specifies interfaces across modules. A language
needs to be designed with a balance between low verbosity and safety; here, in
my opinion, D has chosen too much for low verbosity (as Walter reminds us, D's
anti-hijacking support in imports is a nice idea, but I think it's not enough).

(Reading about safer language subsets like SPARK and MISRA I have learnt that
having a language safe by default is better (because this safety spreads in the
whole ecosystem of the language's users, and not just in a small subset of it),
but where that's not possible, then sometimes an acceptable replacement is a
language design that allows an external tool to statically disallow an unsafe
coding style or feature (in many cases this is not possible for C, or it's
possible but leads to a very limited language). In this case it's easy for an
external tool to statically disallow non-static D imports.)

Bye,
bearophile
Dec 28 2010
next sibling parent reply Stanislav Blinov <stanislav.blinov gmail.com> writes:
On 12/28/2010 11:19 AM, bearophile wrote:
 Adam D. Ruppe:

 That's the ideal situation. Writing out qualified paths by default
 is just awful, and I can't understand why you keep asking for it.
Importing all names from a module on default is against modularity. While you read a module code you don't know where the imported names it uses come from. Take also a look at how Ada specifies interfaces across modules. A language needs to be designed with a balance between low verbosity and safety, here in my opinion D has chosen too much for the low verbosity (as Walter reminds D's anti-hijacking support in imports is a nice idea, but I think it's not enough). (Reading about safer language subsets like SPARK and MISRA I have learnt that having a language safe on default is better (because this safety spreads in the whole ecosystem of the language users, and not just in a small subset of it), but where that's not possible, then sometimes an acceptable replacement is to have a language design that allows an external tool to statically disallow an unsafe coding style or feature (in many cases this is not possible for C, or it's possible but leads to a very limited language). In this case for an external tool it's easy to statically disallow not-static D imports.) Bye, bearophile
Taking an example from the std.algorithm documentation:

1)

int[] arr1 = [ 1, 2, 3, 4 ];
int[] arr2 = [ 5, 6 ];
auto squares = map!("a * a")(chain(arr1, arr2));
assert(equal(squares, [ 1, 4, 9, 16, 25, 36 ]));
// 146 characters

2)

int[] arr1 = [ 1, 2, 3, 4 ];
int[] arr2 = [ 5, 6 ];
auto squares = std.algorithm.map!("a * a")(std.range.chain(arr1, arr2));
assert(std.algorithm.equal(squares, [ 1, 4, 9, 16, 25, 36 ]));
// 184 characters

How is 2) better/safer than 1)? I took a pretty short example, but I can easily
imagine how real code would blow up to 25-40% more characters just for the sake
of explicit qualification, especially when using third-party (or your own)
libraries with nested structure (like, e.g., in Tango). And nothing is actually
gained with this, except that it is "clear" what modules those names come from.

In C++, you sometimes have to fully qualify names "just in case" to prevent
hijacking. But D is hijacking-proof at compile time, so there is no need for
those "just in case" qualifiers; the compiler will tell you when the "case" is
here.

And what would you propose to do with uniform function call syntax if "import
all" is not the default? Even considering that currently it works only for
arrays, would you replace every use of arr.front, arr.back, arr.empty with
std.array.front(arr), std.array.back(arr) and std.array.empty(arr) in favor of
"import safety"? Or would you instead put a long "disqualifying" import line at
the top of every module that uses std.array?

My resume:
 Importing all names from a module on default is against modularity.
It is not. It does not break modularity, it simplifies module interfacing, it reduces amount of code (and is therefore DRY-friendly).
 If in your module you don't want to use qualified paths you use:
 import foo: bar, spam;
...which quickly expands to a lot of *long* import lines, with "don't forget to
add another" constantly pushing the door-bell.
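(The UFCS point in concrete form; the array primitives come from std.array in
2010-era Phobos, in current Phobos they live in std.range.primitives:)

```d
import std.array : front, back, empty;

void main()
{
    int[] arr = [1, 2, 3];
    // arr.front lowers to front(arr); this only reads naturally
    // because the names were imported unqualified.
    assert(!arr.empty);
    assert(arr.front == 1 && arr.back == 3);
}
```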
Dec 28 2010
next sibling parent reply bearophile <bearophileHUGS lycps.com> writes:
Stanislav Blinov:

 ...which quickly expands to a lot of *long* import lines, with "don't 
 forget to add another" constantly pushing the door-bell.
That redundancy is "making your subsystems' interfaces explicit". I see that
you and other people here fail to understand this, or just value saving a bit
of typing more than a tidy way of coding. To interested people I suggest
reading papers about this part of the Ada design (and generally about module
system design).

Bye,
bearophile
Dec 28 2010
next sibling parent Stanislav Blinov <stanislav.blinov gmail.com> writes:
On 12/28/2010 03:26 PM, bearophile wrote:
 Stanislav Blinov:

 ...which quickly expands to a lot of *long* import lines, with "don't
 forget to add another" constantly pushing the door-bell.
That redundancy is "making your subsystems' interfaces explicit". I see that you and other people here fail to understand this, or just value saving a bit of typing more than a tidy way of coding. To interested people I suggest reading papers about this part of the Ada design (and generally about module system design).
What I do fail to understand is what these explicit interfaces would bring
*into D*. Until now, you haven't demonstrated anything they would give that D
doesn't already have.

BTW, I forgot to mention one more point: explicit qualification has a
significant impact on generic code:

void foo(T)(T t) if (__traits(compiles, bar(t)))
{
    bar(t);
}

How would that be scalable to new types if bar had to be explicitly qualified?

void foo(T)(T t) if (__traits(compiles, pkg.modbar.bar(t)))
{
    pkg.modbar.bar(t);
}

Now, to extend the range of types accepted by foo(), you *have to* change the
pkg.modbar module to introduce bar() for new types, which isn't always possible
or desirable.
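(A self-contained version of the first form, with invented bar() overloads,
showing how any bar() visible at the instantiation point extends foo():)

```d
import std.stdio : writeln;

// foo accepts any T for which some bar() overload is in scope --
// no qualification, so newly defined overloads just work.
void foo(T)(T t) if (__traits(compiles, bar(t)))
{
    bar(t);
}

void bar(int x)    { writeln("int: ", x); }
void bar(string s) { writeln("string: ", s); }

void main()
{
    foo(42);
    foo("hello");
    // foo(3.14) would fail the constraint: there is no bar(double).
}
```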
Dec 28 2010
prev sibling parent Don <nospam nospam.com> writes:
bearophile wrote:
 Stanislav Blinov:
 
 ...which quickly expands to a lot of *long* import lines, with "don't 
 forget to add another" constantly pushing the door-bell.
That redundancy is "making your subsystems' interfaces explicit". I see that you and other people here fail to understand this, or just value saving a bit of typing more than a tidy way of coding. To interested people I suggest reading papers about this part of the Ada design (and generally about module system design). Bye, bearophile
Bearophile, you frequently talk about "a tidy way of coding" in arguments. As
far as I can tell, it simply means "bearophile's way of coding", and it's
always verbose, but the 'tidy' contains an implied value judgement that it is
good.

Your use of 'tidy coding' doesn't distinguish between a coding style which
catches bugs, versus a coding style which contains needless, redundant
verbosity -- and that is inevitably what the discussion is about. I do not
think it is ever valid to use that expression. To me, that term is always a
flag that a fallacy is coming.
Dec 28 2010
prev sibling parent reply spir <denis.spir gmail.com> writes:
On Tue, 28 Dec 2010 14:38:56 +0300
Stanislav Blinov <stanislav.blinov gmail.com> wrote:

 Taking an example from std.algorithm documentation:

 1)
 int[] arr1 = [ 1, 2, 3, 4 ];
 int[] arr2 = [ 5, 6 ];
 auto squares = map!("a * a")(chain(arr1, arr2));
 assert(equal(squares, [ 1, 4, 9, 16, 25, 36 ]));
 // 146 characters

 2)
 int[] arr1 = [ 1, 2, 3, 4 ];
 int[] arr2 = [ 5, 6 ];
 auto squares = std.algorithm.map!("a * a")(std.range.chain(arr1, arr2));
 assert(std.algorithm.equal(squares, [ 1, 4, 9, 16, 25, 36 ]));
 // 184 characters

 How is 2) better/safer than 1)? I took a pretty short example, but I easily
 imagine as real code would blow up to 25-40% more characters just for the
 sake of explicit qualification, especially when using third-party (or own)
 libraries with nested structure (like, e.g., in Tango).

It seems you don't get the point (or rather what I think is the actual point).
"Safe" import (as the OP defines and names it; I would rather call it
"explicit import") perfectly allows aliasing imported symbols, as D presently
does it:

import mod : func = somefunc; // or simply func = func

But the current default / least-resistance way of importing blindly imports
all symbols from the module into a local namespace and -- even more
importantly for me:

1. at the import place, does not tell the reader which symbols are actually used
2. at the use place, does not tell the reader where a given symbol is imported from

These are imo essential documentation needs unfulfilled. (I'm constantly
bumping into this wall when reading other people's code; esp. Phobos source;
even more when it uses C stdlib funcs that everybody but me seems to know
where they are defined, what they actually do, and how ;-)

Keeping the module name instead of aliasing in numerous cases makes the code
far easier to read. For instance, I commonly use this scheme:

static import file = std.file;
...
auto config = file.readText("config");

There should be a way to carelessly import everything -- for instance into a
parser module using (most) pattern types, constants, and more, from a parsing
lib. This is correct in particular cases, and secure thanks to the fact that
in D symbols are not implicitly re-exported. But this should not be the
default case; then the plain fact of using a dedicated syntax (e.g. like the
proposed "import parseLib : *;") would serve as a kind of documentation about
a non-standard practice.

Maybe it's only me (and a few other martian programmers) expecting code to be
commonly clearer...

Denis
-- -- -- -- -- -- --
vit esse estrany ☣

spir.wikidot.com
Dec 29 2010
parent Adam Ruppe <destructionator gmail.com> writes:
spir wrote:
 These are imo essential documentation needs unfulfilled.
 1. at import place, does not tell the reader which symbols
 are actually used
Two things:

a) Why does it really matter at all? The specific functions won't change the
fact that you depend on the module.

b) Even if you grant that, what are the odds that such documentation would
stay up to date as you remove the use of functions?
 2. at use place, does not tell the reader where a given symbol
is imported from

That introduces an unnecessary dependency on module implementation details (unnecessary in the strictest sense - it compiles fine without it!) If you want to determine that, you can consult the documentation or the compiler.
  For instance, I commonly use this scheme: [snip std.file]
I do that too in a lot of cases, where the module name is meaningful and not redundant. Same for things like std.base64.encode, though the new release makes the module namespace redundant... Anyway, in some cases it gives good information.

But what about std.variant.Variant? Or std.json.toJSON? You're just repeating yourself, and if you want to know where it is from, you can check the imports and/or the documentation to easily confirm it: http://dpldocs.info/Variant

Listing the name over and over again, and having to seek and change it over and over again when other modules make a small change, just adds a lot of work for no benefit. Besides, if you want docs on the item, you have to look it up anyway.
 Maybe it's only me (and a few other martian programmers) expecting > code to be
commonly clearer...

You're assuming your way is, in fact, objectively clearer.
Dec 29 2010
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 (Reading about safer language subsets like SPARK and MISRA I have learnt that
 having a language safe on default is better
Reading about a language is not good enough to make such decisions. You need to have considerable experience with them. I've seen a lot of claims about languages that simply do not pan out. Exception specifications in Java is a prime example - it took years of experience to discover that was a really bad idea. It had the opposite effect from what was intended (and touted).
 Take also a look at how Ada specifies interfaces across modules.
Ada is a failed language.
 A language needs to be designed with a balance between low verbosity and 
safety, here in my opinion D has chosen too much for the low verbosity (as Walter reminds D's anti-hijacking support in imports is a nice idea, but I think it's not enough).

D's anti-hijacking nails it. You get the benefits of low verbosity, and it's perfectly safe. It's a lot more than just a "nice idea".
Dec 28 2010
parent bearophile <bearophileHUGS lycps.com> writes:
Stanislav Blinov:
 ...which quickly expands to a lot of *long* import lines, with "don't
 forget to add another" constantly pushing the door-bell.
Walter:
 Reading about a language is not good enough to make such decisions. You need
to 
 have considerable experience with them. I've seen a lot of claims about 
 languages that simply do not pan out. Exception specifications in Java is a 
 prime example - it took years of experience to discover that was a really bad 
 idea. It had the opposite effect from what was intended (and touted).
You are both right. I have to use those languages (like Ada) for something more than tiny programs, and then maybe I will be able to bring here a bit of experimental evidence :-) Sorry for the noise and for using some of your time. Bye, bearophile
Dec 28 2010
prev sibling next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
 While you read a module code you don't know where the imported
 names it uses come from.
That's a feature. Let me give an example. I recently wrote an OAuth implementation for my web app.

I started off by writing it all in one module, the one where it was used. That was ok until I wanted to use it in a different module, so I cut and pasted it to a different file, the shared util.d.

Compile, both modules now work. No need to edit the user code at all.

Next up I moved it from that grab bag of project specific utilities to a generic oauth.d module, refactoring it to be generic. Add "import oauth;" and compile. All modules work without editing their code.

Later, I might choose to rename it to something more generic. If I added another web mechanism, I might call it webauth.d or something like that. Aside from the import lines, the rest of the user code would confidently remain unchanged.

That's the advantage a good import system like D has. You can make changes with ease and confidence knowing it isn't going to break anything anywhere else. That's really the goal of modularity: making it so a change in one place won't break code anywhere else. Using fully qualified names works /against/ that.

(Note though that sometimes fully qualified names do make things more clear or show something special about that block of code. They have their uses and it makes me very happy D supports it and has static imports. But their uses are more specialized: being the default would be a net negative, for the reasons above.)
Dec 28 2010
parent reply spir <denis.spir gmail.com> writes:
On Tue, 28 Dec 2010 16:01:24 +0000 (UTC)
"Adam D. Ruppe" <destructionator gmail.com> wrote:

 While you read a module code you don't know where the imported
 names it uses come from.

 That's a feature.
 [...]
 I started off by writing it all in one module, the one where it
 was used. That was ok until I wanted to use it in a different module,
 so I cut and pasted it to a different file, the shared util.d.

 Compile, both modules now work. No need to edit the user code at all.

 Next up I moved it from that grab bag of project specific utilities
 to a generic oauth.d module, refactoring it to be generic.

 Add "import oauth;" and compile. All modules work without editing their code.

But that's true for explicit imports as well:

import util : x1,x2,x3;
-->
import oauth : x1,x2,x3;

What's the point? Or what do I miss?

(Name qualification in code, I mean at use place, is best used for std or third-party libs, I guess. When one uses one's own utils, it's generally not useful; listing used symbols at import place is enough for reader documentation.)

denis
-- -- -- -- -- -- --
vit esse estrany
☣

spir.wikidot.com
Dec 29 2010
parent reply Adam Ruppe <destructionator gmail.com> writes:
spir wrote:
 But that's true for explicite imports as well:
  import util : x1,x2,x3;
 -->
  import oauth : x1,x2,x3;
Yeah, that's not too bad, since the changes are still just in import lines. Though like I said in my last post, I don't see much of a benefit here either.
 (Name qualification in code, I mean at use place, is best used for
 std or third-party lib, I guess.
I don't see a difference between the two. If the source module matters, it matters if it is third party or not. If it doesn't matter, well it just doesn't matter!
Dec 29 2010
next sibling parent spir <denis.spir gmail.com> writes:
On Wed, 29 Dec 2010 14:40:31 +0000 (UTC)
Adam Ruppe <destructionator gmail.com> wrote:

 (Name qualification in code, I mean at use place, is best used for
 std or third-party lib, I guess.

 I don't see a difference between the two. If the source module
 matters, it matters if it is third party or not. If it doesn't
 matter, well it just doesn't matter!

Your argumentation seems to mix two (related but distinct) features:

* Explicit import at import place:
    import std.stdio : writeln;
* Name qualification at use place:
    std.stdio.writeln("foo");

(When you use the first one, the second one is optional, indeed.)

I am arguing for the first feature, but your replies (and replies from others) often seem to imply I was preaching for the second (which I don't; I'm not a fan of Java at all). I think we both agree the qualification idiom is a good thing when it helps clarity, meaning names make sense.

It is wrong, for me, to make implicit import-all the default scheme. I cannot imagine how/why you do not see the huge documentation benefit of listing your toolkit at the top of a module. Also, I do not understand how I'm supposed to guess where a func comes from if nothing in the importing module tells it to me. (Surely I sadly miss the seventh divination sense of talented programmers.)

About your reply above: when using your own toolkit or domain-specific module, semantic or conceptual relations with what your code is doing usually are more obvious:

import myParsingLib : Literal, Klass, Choice, Sequence;

// expression parser
auto dot = new Literal(".");
auto digit = new Klass("0-9");
...

Qualification would often be a kind of over-specification.

This is different from using general-purpose libs, e.g. for handling files: here, proper qualifiers are often a true semantic hint on what an instruction copes with. Also, names of funcs, types, and constants are often very similar (if not identical) across general-purpose libraries. (Think of write*, read*, ...)

Denis
-- -- -- -- -- -- --
vit esse estrany
☣

spir.wikidot.com
Dec 29 2010
prev sibling next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 12/29/10, spir <denis.spir gmail.com> wrote:
 It is wrong, for me, to make implicit import-all the default scheme. I cannot
 imagine how/why you do not see the huge documentation benefit of listing
 your toolkit at the top a module. Also, I do not understand how I'm supposed
 to guess where a func comes from if nothing in the importing module tells it
 to me.
Like Walter said, D offers measures against function hijacking, and that was the biggest problem with implicit import-all. D still offers features like static import and selective import, so if someone wants to take advantage of those to make the code more readable, they *can*. In the end it's up to the programmer to choose. D has many options to keep everyone happy.

Personally, I start writing some tryout code by importing modules as normal:

import std.range;
import std.algorithm;
{ ... test some code.. }

And then when I've figured out a good set of symbols that I need, I add a colon and list them:

import std.range : retro;
import std.algorithm : reduce, reverse;

But I really don't see the benefit of changing the semantics of import. You won't get shot in the foot, since D offers good protection from function hijacking.

But even if you did change the semantics of import to a static import, you still wouldn't fix the *programmers* themselves. Everyone will simply start using /import module.*/, or /import module.all/, instead of the safe default /import module/ which would be a static import. No defined default or convention in the community can force programmers to code in one way.

And if you doubt that, just take a look at all the Python code that's out there. A ton of programmers still use the star syntax to import every symbol into the current scope, even though it's frowned upon by the Python community (and afaik you can't use it anymore in Python 3). But D has counter-measures which prevent hijacking, and hence it's pretty safe to simply import the whole module without worrying too much.

As for figuring out where each symbol comes from: if it's your own codebase, you'll probably use the colon syntax if you're really having trouble figuring out where the symbols are coming from. Otherwise, in larger codebases you'll more than likely use some capable IDE or a plugin that knows where each symbol comes from and lists them for you.
Otherwise I don't know what this discussion is about anymore. :)
Dec 29 2010
parent bearophile <bearophileHUGS lycos.com> writes:
Andrej Mitrovic:

 But even if you did change the semantics of import to a static import,
 you still wouldn't fix the *programmers* themselves.
This can't justify a worse design for the language. And the syntax (and common idioms) of a language have some influence on the way programmers write code. If you give them a new default, some of them will use the new default instead of idioms from other languages they already know.
 And if you doubt that, just take a look at all the Python code that's
 out there. A ton of programmers still use the star syntax to import
 every symbol into the current scope,
If I go to the Python Package Index (PyPI), I don't see many "from foo import *". They are common only when you use code in the shell, or when you write 20-line scripts that use good modules. But in those good modules you will not find "import *" often. I guess compared to C programmers the Python community is more willing to follow a common good coding style :-)

Bye,
bearophile
Dec 29 2010
prev sibling parent spir <denis.spir gmail.com> writes:
On Wed, 29 Dec 2010 19:24:12 +0100
Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:

 But I really don't see the benefit of changing the semantics of
 import. You won't get shot in the foot since D offers good protection
 from function hijacking.
Yes, I'm aware of this. But this (great) feature is not related to the discussion, I guess.
 But even if you did change the semantics of import to a static import,
 you still wouldn't fix the *programmers* themselves. Everyone will
 simply start using /import module.*/, or /import module.all/ instead
 of using the safe default /import module/ which would be a static
 import. No defined default or convention in the community can force
 programmers to code in one way.
 And if you doubt that, just take a look at all the Python code that's
 out there. A ton of programmers still use the star syntax to import
 every symbol into the current scope, even though its frowned upon by
 the Python community (and afaik you can't use it anymore in Python 3).
 But D has counter-measures which prevent hijacking, and hence it's
 pretty safe to simply import the whole module without worrying too
 much.
We certainly don't read code by the same programmers ;-) I have never seen "from x import *" except in newcomer code and cases like the one evoked in a previous post, namely parsing: a parser-definition module using e.g. "from pyparsing import *".

A counter-measure is to define what is to be exported. (A very nice feature of Lua: the module's return {whatever you want to export}.) Similarly, in Python you get __all__ (some proposed to allow import * only if the module defines __all__, sensible indeed).

In D you have to declare everything _else_ in the module private: not only is it not the default, it's imo reversing the logic.

[Now, I'll stop arguing ;-)]

Denis
-- -- -- -- -- -- --
vit esse estrany
☣

spir.wikidot.com
Dec 29 2010
prev sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
Let me tell a horror story about what is definitely bad modularity:
something that happened in the old PHP app.

(I might pick on PHP a lot, but that's just because it is the
language I have the most professional experience in so I've seen
a lot of its horrors first hand. That, and it is a godawful language!)

So I had to add a function to one of the project's includes to enable
new functionality. I didn't even get very far into it before a
mysterious error came up across the site:

PHP Fatal error: Cannot redeclare bar_foo() (previously declared
in my file) in completely unrelated file on line x.


WTF. I coincidentally picked the same name for my function
as my predecessor did for another function a long time ago, in
a file far, far away! And it broke things scattered throughout
the program. C does this too, but at least there's a compile
step to save you.

The sad part: that's the best case scenario with PHP! If it hadn't
broken the entire site, I might not have caught it so early and
then there'd be a real mess on the one page that just happened
to include both files at once, hidden, lurking...

But hell, at least if you coincidentally pick the same name as
your predecessor it doesn't *overwrite* one of your functions at
runtime like some languages! I kid you not, some popular languages
actually do that. It blows my mind.



Anyway, that's bad modularity and a couple of bug prone behaviors
coming from it. Using a strict naming convention with long, fully
qualified names (like familyname_functionname convention) might
be a good thing in those languages, to reduce the odds that two
team members pick the same name and break code elsewhere in the app.

In D, however, such things are not necessary. The compiler makes
it Just Work or fail early and fail meaningfully in the exact
place where there actually is a problem and nowhere else. Such
problems cannot occur, so the logic that goes into protecting
yourself in other languages doesn't necessarily apply to D.
Dec 28 2010
parent reply sybrandy <sybrandy gmail.com> writes:
On 12/28/2010 11:49 AM, Adam D. Ruppe wrote:
 Let me tell a horror story about what is definitely bad modularity:
 something that happened in the old PHP app.

 [...snip...]

 In D, however, such things are not necessary. The compiler makes
 it Just Work or fail early and fail meaningfully in the exact
 place where there actually is a problem and nowhere else. Such
 problems cannot occur, so the logic that goes into protecting
 yourself in other languages doesn't necessarily apply to D.
This actually reminds me of an article I read quite a while ago that praised Erlang for how it handles functions in different modules. It requires that you prefix the name of any imported function with the name of the module you are importing from. So, if you want to use the function bar from module foo, you write something like this:

foo:bar(1),

Now, the nice thing about this is that you immediately see what module this function came from. Also, it helps ensure that your function names are truly unique, since they now include the module name. However, the bad part is that you have more to type. Personally, I find that to be a minor issue compared to the benefits, but that's just me.

Am I saying this is how it should work in D? No, but I think it can be done now anyway, IIRC. I'll have to look at the docs. Regardless, it may be a good "best practice" for those times when you are working on a large application.

Casey
Dec 28 2010
parent reply Adam Ruppe <destructionator gmail.com> writes:
sybrandy wrote:
 Now, the nice thing about this is that you immediately see what
 module this function came from.
Yes, that's like a static import in D. You can also use fully qualified names with a standard import:

std.stdio.File -- optional long form for File

static import std.stdio; // you *must* use the long form for all names
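A minimal compilable illustration of the two forms Adam contrasts (std.conv is chosen arbitrarily as the statically imported module):

```d
import std.stdio;        // plain import: short names are available
static import std.conv;  // static import: the long form is mandatory

void main()
{
    writeln("short form");            // unqualified works for plain imports
    std.stdio.writeln("long form");   // the long form is always allowed too
    auto s = std.conv.to!string(42);  // must qualify: std.conv is static
    // auto t = to!string(42);        // error: "to" is not in scope
    assert(s == "42");
}
```

The static import gives the Erlang-like property sybrandy describes: every use names its module.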
 However, the bad part is that you have more to type.
That's not the only bad part. It also means refactoring your modules requires changes to the user code too. See my other post here: http://www.digitalmars.com/pnews/read.php?server=news.digitalmars.com&group=digitalmars.D&artnum=125420
Dec 28 2010
next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 12/28/10, Adam Ruppe <destructionator gmail.com> wrote:
 That's not the only bad part. It also means refactoring your modules
 requires changes to the user code too. See my other post here:
Actually, D is equipped to solve even that problem. If you really want to use fully qualified names and reserve the right to rename a module, you can do this:

foo.d:
import std.stdio : writeln;
void bar() { writeln("bar"); }

main.d:
static import foo = foo;
void main() { foo.bar(); }

If you decide to rename the foo module to "foobar", all you need to change is one line in main:

static import foo = foobar;

This is much saner than bearophile's "force full qualified names by default", since it still allows you to refactor your modules with the minimum amount of changes in the importing code.
Dec 28 2010
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Andrej Mitrovic:

 This is much saner than bearophile's "force full qualified names by
 default",
I have never said to force that. I have said not to import all names from a module by default. And there are three ways to avoid that when you want it: list the names you want, use a star, or use a star plus a pack name (like all).

Bye,
bearophile
Dec 28 2010
prev sibling next sibling parent Adam Ruppe <destructionator gmail.com> writes:
Andrej Mitrovic wrote:
 Actually, D is equipped to solve even that problem
Indeed. Though, that doesn't cover all the uses of shuffling things around. My oauth.d module currently still includes some of my own specific app, like my own api secret. (It was originally hard coded, then I moved it to a helper function, but the helper function is still in the generic module instead of moved out.)

It looks like this:

oauthRequest(struct OAuthParams, request details...);
immutable myAppParams = OAuthParams("api key", "api secret", other);

In user code:

oauthRequest(myAppParams, ...);

Soon, I'll cut and paste that myAppParams out and move it to the app-specific config.d, making oauth.d itself fully generic. If I was using all fully qualified names, even if renamed:

oauth.oauthRequest(oauth.myAppParams, ...);

And I'd then have to change all that usage to:

oauth.oauthRequest(config.myAppParams, ...);

Relatively minor in this situation - at least it would tell me at compile time that oauth.myAppParams no longer exists, so it is an easy fix, but you can see how this could become very annoying as the usage grows.
Dec 28 2010
prev sibling parent sybrandy <sybrandy gmail.com> writes:
On 12/28/2010 12:37 PM, Andrej Mitrovic wrote:
 On 12/28/10, Adam Ruppe<destructionator gmail.com>  wrote:
 That's not the only bad part. It also means refactoring your modules
 requires changes to the user code too. See my other post here:
 Actually, D is equipped to solve even that problem. If you really
 want to use fully qualified names and reserve the right to rename a
 module, you can do this:

 foo.d:
 import std.stdio : writeln;
 void bar() { writeln("bar"); }

 main.d:
 static import foo = foo;
 void main() { foo.bar(); }

 If you decide to rename the foo module to "foobar", all you need to
 change is one line in main:

 static import foo = foobar;
I do like this and I did think about this for a different reason: avoiding long/obnoxious module names. (E.g. Java-like) However, this is another good reason to use this feature. Casey
Dec 29 2010
prev sibling parent spir <denis.spir gmail.com> writes:
On Tue, 28 Dec 2010 18:37:37 +0100
Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:

 On 12/28/10, Adam Ruppe <destructionator gmail.com> wrote:
 That's not the only bad part. It also means refactoring your modules
 requires changes to the user code too. See my other post here:
 Actually, D is equipped to solve even that problem. If you really
 want to use fully qualified names and reserve the right to rename a
 module, you can do this:

 foo.d:
 import std.stdio : writeln;
 void bar() { writeln("bar"); }

 main.d:
 static import foo = foo;
 void main() { foo.bar(); }

 If you decide to rename the foo module to "foobar", all you need to
 change is one line in main:

 static import foo = foobar;

 This is much saner than bearophile's "force full qualified names by
 default", since it still allows you to refactor your modules with the
 minimum amount of changes in the importing code.

Waow! I had not guessed that. Thanks for the tip :-) But it does not change anything relating to the 'feature' "import every symbol by default", as far as I can tell. Or does it? On the contrary, it seems to me this helps having good documentation practice and/or clearer code without potential refactoring issues. (Which, anyway, find & replace "foo." --> "foobar." solves in 99% of cases.)

Denis
-- -- -- -- -- -- --
vit esse estrany
☣

spir.wikidot.com
Dec 29 2010
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Adam D. Ruppe:

 Writing out qualified paths by default
 is just awful, and I can't understand why you keep asking for it.
If in your module you don't want to use qualified paths, you use:

import foo: bar, spam;

To import all non-private names of the foo module, use:

import foo: *;

Recently Andrei has said that syntax is blunt, preferring the user-defined "all" name solution (an "all" module in a package). So to define a less blunt import semantics, you may add a * to the module you use as a name pack:

import foo: all*; // imports all names in the "all" name pack

Bye,
bearophile
Dec 28 2010
parent reply bearophile <bearophileHUGS lycos.com> writes:
bearophile:

 import foo: all*; // imports all names in the "all" name pack
Or just: import foo.all: *; Bye, bearophile
Dec 28 2010
parent Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 bearophile:
 
 import foo: all*; // imports all names in the "all" name pack
Or just: import foo.all: *;
Better yet: import foo.all; and it's even implemented!
Dec 28 2010
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/28/10 12:09 AM, bearophile wrote:
 Andrei:

 FWIW I just posted a response to a question asking for a comparison
 between Clay and D2.

 http://www.reddit.com/r/programming/comments/es2jx/clay_programming_language_wiki/
Just few comments:
 The docs offer very little on Clay's module system (which is rock solid in D2).
D2 module system may be fixed, but currently it's not even bread-solid. The Clay syntax for imports is more similar to what I have desired for D (but that () syntax is not so good):

import foo.bar;
// Imports module foo.bar as a qualified path
// use "foo.bar.bas" to access foo.bar member bas

import foo.bar as bar;
// Imports module foo.bar with alias bar
// use "bar.bas" to access foo.bar member bas

import foo.bar.(bas);
// Imports member bas from module foo.bar
// use "bas" to access foo.bar member bas

import foo.bar.(bas as fooBarBas);
// Imports member bas with alias fooBarBas

import foo.bar.*;
// Imports all members from module foo.bar

I don't know about Modula-3's module system; I will search for info about it.
 Clay mentions multiple dispatch as a major feature. Based on extensive
experience in the topic I believe that that's a waste of time. Modern C++
Design has an extensive chapter on multiple dispatch, and I can vouch next to
nobody uses it in the real world. Sure, it's nice to have, but its actual
applicability is limited to shape collision testing and a few toy examples.<
I think double dispatch is enough; it covers most cases and keeps compiler complexity low enough. If you put double dispatch with a nice syntax in D, then maybe people will use it. There are many things that people in other languages use that C++ programmers don't use, because using them in C++ is ugly, a pain, unsafe, etc. The visitor pattern is used enough in Java (Scala too was designed to solve this problem).
This is... out there. I can't stop wondering - on what basis do you make such infinitely confident assertions? Have you worked on any large scale C++ system, or on a C++ system of any scale for that matter? Have you built one or more systems in whatever language in which multiple dispatch was an enabling feature?

A competent C++ programmer will use a pattern such as Visitor or double dispatch if needed. Double dispatch is not ugly in C++ (as Modern C++ Design has shown ten years ago), and even if it were, people who need it would still use it. Also, Visitor looks and acts at least as good in C++ (compared to e.g. Java) with the help of templates and macros.

Speaking from direct experience: although double dispatch as implemented in Modern C++ Design was (and probably still is) best of the breed, based on the feedback I got (better said, I didn't get), I can confidently say people seldom need it. There were major bugs in the implementation that lasted for seven years before anyone noticed them. In contrast, bugs in virtually all other chapters of the book were very quick to show up. This is because people needed the other patterns, but didn't need double dispatch.

Please, bearophile, if you are looking for a New Year's resolution, vow to stop feigning more competence than you have. You are plenty good, we all appreciate you, but please stick with what you actually know, which is a lot. I'm sorry but it would be a full-time job for me or anyone to debunk the occasional enormity you write, so I must resort to being frank just this once. Hope you understand.

Thank you.

Andrei
Dec 27 2010
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 The docs offer very little on Clay's module system (which is rock solid in
 D2).
D2 module system may be fixed, but currently it's not even bread-solid.
It does all of your wish list, and more. I've pointed this out to you before, even in the last month. Please read the documentation.

http://www.digitalmars.com/d/2.0/module.html

D's anti-hijacking support in imports is unique to D, and second to none.

http://www.digitalmars.com/d/2.0/hijack.html
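A quick sketch of that anti-hijacking behavior, using a real name overlap in Phobos (both std.stdio and std.file export a write symbol):

```d
import std.stdio;
import std.file;

void main()
{
    // write("out.txt", "data");  // error: both std.stdio.write and
    //                            // std.file.write match this call, and
    //                            // D refuses to silently pick one
    std.stdio.write("hello");     // qualifying the call resolves it
}
```

Because matching symbols from different imported modules produce an ambiguity error instead of a silent pick, adding a new import can never quietly change which function an existing call resolves to.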
Dec 27 2010
prev sibling parent Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes:
bearophile wrote:

 Andrei:
 
 FWIW I just posted a response to a question asking for a comparison
 between Clay and D2.
 
 
http://www.reddit.com/r/programming/comments/es2jx/clay_programming_language_wiki/
 
 Just few comments:
 
 The docs offer very little on Clay's module system (which is rock solid
 in D2).
D2 module system may be fixed, but currently it's not even bread-solid. The Clay syntax for imports is more similar to what I have desired for D (but that () syntax is not so good): import foo.bar; // Imports module foo.bar as a qualified path // use "foo.bar.bas" to access foo.bar member bas import foo.bar as bar; // Imports module foo.bar with alias bar // use "bar.bas" to access foo.bar member bas import foo.bar.(bas); // Imports member bas from module foo.bar // use "bas" to access foo.bar member bas import foo.bar.(bas as fooBarBas) // Imports member bas with alias fooBarBas import foo.bar.*; // Imports all members from module foo.bar
This looks like a subset of the D module system. The only difference is 'static' import by default and the .* feature which has to be implemented by the library author in D. Are those the things you find missing in D?
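For concreteness, the existing D spellings of those Clay forms, sketched with std.conv standing in for foo.bar and its member to for bas (names illustrative):

```d
// Clay: import foo.bar;                    -> D plain import
import std.conv;
// Clay: import foo.bar as bar;             -> D renamed import
import conv = std.conv;
// Clay: import foo.bar.(bas);              -> D selective import
import std.conv : to;
// Clay: import foo.bar.(bas as fooBarBas); -> D aliased selective import
import std.conv : toStr = to;

void main()
{
    assert(to!string(42) == "42");      // selective import
    assert(conv.to!string(42) == "42"); // renamed (qualified-only) import
    assert(toStr!string(42) == "42");   // aliased selective import
}
```

Only Clay's "import foo.bar.*;" has no dedicated D syntax; D's plain import is the closest equivalent.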
 I don't know about Modula3 module system, I will search info about it.
 
 
Clay mentions multiple dispatch as a major feature. Based on extensive
experience in the topic I believe that that's a waste of time. Modern C++
Design has an extensive chapter on multiple dispatch, and I can vouch next
to nobody uses it in the real world. Sure, it's nice to have, but its
actual applicability is limited to shape collision testing and a few toy
examples.<
I think double dispatch is enough, it cover most cases and keeps both compiler complexity low enough. If you put double dispatch with a nice syntax in D then maybe people will use it. There are many things that people in other languages use that C++ programmers don't use because using it in C++ is ugly, a pain, unsafe, etc. The visitor pattern is used enough in Java (Scala too was designed to solve this problem). Bye, bearophile
I think I agree here. I have never programmed much in a language with multiple dispatch, but every time I see dynamic casting or the visitor pattern in an OOP program I think about how much better it would be with multiple dispatch. Honestly, it's just speculation, but I guess the lack of interest is because it is not a widely known and available solution. Library implementations have issues, both usability- and performance-wise.
Dec 28 2010
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/27/10 6:55 PM, Andrei Alexandrescu wrote:
 On 12/27/10 12:35 PM, bearophile wrote:
 Through Reddit I have found a link to some information about the Clay
 language, it wants to be (or it will be) a C++-class language, but
 it's not tied to C syntax. It shares several semantic similarities
 with D too. It looks like a cute language:
 https://github.com/jckarter/clay/wiki/
[snip]

FWIW I just posted a response to a question asking for a comparison between Clay and D2.

http://www.reddit.com/r/programming/comments/es2jx/clay_programming_language_wiki/
That thread is shaping up to be more and more interesting because it's turning into a discussion of generic programming at large.

Andrei
Dec 29 2010
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 29 Dec 2010 14:42:53 -0500, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 12/27/10 6:55 PM, Andrei Alexandrescu wrote:
 On 12/27/10 12:35 PM, bearophile wrote:
 Through Reddit I have found a link to some information about the Clay
 language, it wants to be (or it will be) a C++-class language, but
 it's not tied to C syntax. It shares several semantic similarities
 with D too. It looks like a cute language:
 https://github.com/jckarter/clay/wiki/
[snip] FWIW I just posted a response to a question asking for a comparison between Clay and D2. http://www.reddit.com/r/programming/comments/es2jx/clay_programming_language_wiki/
That thread is shaping up more and more interesting because it's turning into a discussion of generic programming at large.
I wanted to address your post in the reddit discussion regarding the issue of operator overloads not being virtual:

"This non-issue has been discussed in the D newsgroup. You can implement virtuals on top of non-virtuals efficiently, but not vice versa."

I've found some very real problems with that when implementing operator overloads in dcollections. It's forced me to use the (yet to be deprecated) opXXX forms. Specifically, you cannot use covariance with templated functions without repeating the entire implementation in the derived class. Also, let's not forget, templates can't be used in interfaces, which means no operator overloading in interfaces.

Related bug:

http://d.puremagic.com/issues/show_bug.cgi?id=4174

-Steve
Dec 29 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/29/10 2:10 PM, Steven Schveighoffer wrote:
 On Wed, 29 Dec 2010 14:42:53 -0500, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 On 12/27/10 6:55 PM, Andrei Alexandrescu wrote:
 On 12/27/10 12:35 PM, bearophile wrote:
 Through Reddit I have found a link to some information about the Clay
 language, it wants to be (or it will be) a C++-class language, but
 it's not tied to C syntax. It shares several semantic similarities
 with D too. It looks like a cute language:
 https://github.com/jckarter/clay/wiki/
[snip] FWIW I just posted a response to a question asking for a comparison between Clay and D2. http://www.reddit.com/r/programming/comments/es2jx/clay_programming_language_wiki/
That thread is shaping up more and more interesting because it's turning into a discussion of generic programming at large.
I wanted to address your post in the reddit discussion regarding the issue of operator overloads not being virtual: "This non-issue has been discussed in the D newsgroup. You can implement virtuals on top of non-virtuals efficiently, but not vice versa." I've found some very real problems with that, when implementing operator overloads in dcollections. It's forced me to use the (yet to be deprecated) opXXX forms. Specifically, you cannot use covariance with templated functions without repeating the entire implementation in the derived class.
Glad you're bringing that up. Could you please post an example that summarizes the issue?
 Also, let's not forget, templates can't be used in
 interfaces, which means no operator overloading in interfaces.
Interfaces allow final methods which should help.
 Related bug:

 http://d.puremagic.com/issues/show_bug.cgi?id=4174
That should be just what the doctor ordered. I updated the report and assigned it to Walter.

Andrei
Dec 29 2010
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 29 Dec 2010 15:38:27 -0500, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 12/29/10 2:10 PM, Steven Schveighoffer wrote:
 On Wed, 29 Dec 2010 14:42:53 -0500, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 On 12/27/10 6:55 PM, Andrei Alexandrescu wrote:
 On 12/27/10 12:35 PM, bearophile wrote:
 Through Reddit I have found a link to some information about the Clay
 language, it wants to be (or it will be) a C++-class language, but
 it's not tied to C syntax. It shares several semantic similarities
 with D too. It looks like a cute language:
 https://github.com/jckarter/clay/wiki/
[snip] FWIW I just posted a response to a question asking for a comparison between Clay and D2. http://www.reddit.com/r/programming/comments/es2jx/clay_programming_language_wiki/
That thread is shaping up more and more interesting because it's turning into a discussion of generic programming at large.
I wanted to address your post in the reddit discussion regarding the issue of operator overloads not being virtual: "This non-issue has been discussed in the D newsgroup. You can implement virtuals on top of non-virtuals efficiently, but not vice versa." I've found some very real problems with that, when implementing operator overloads in dcollections. It's forced me to use the (yet to be deprecated) opXXX forms. Specifically, you cannot use covariance with templated functions without repeating the entire implementation in the derived class.
Glad you're bringing that up. Could you please post an example that summarizes the issue?
With D1:

interface List
{
    List opCat(List other);
}

class LinkList : List
{
    LinkList opCat(List other) {...}
}

With D2:

interface List
{
    List doCat(List other); // implement this in derived class
    List opBinary(string op)(List other) if (op == "~") { return doCat(other); }
}

class LinkList : List
{
    LinkList doCat(List other) {...}
}

// usage:
LinkList ll = new LinkList(1, 2, 3);
ll = ll ~ ll; // works with D1, fails on D2: "can't assign List to LinkList"

The solution is to restate opBinary in all derived classes with the *exact same code* but a different return type. I find this solution unacceptable.
 Also, let's not forget, templates can't be used in
 interfaces, which means no operator overloading in interfaces.
Interfaces allow final methods which should help.
operator overloads *must* be templates, and templates aren't allowed, so final methods don't help here. Actually, final methods can't forward covariance either, so we have issues there. I think we need a general solution to the covariance problem. I also have filed a separate bug on covariance not being forwarded by alias.
 Related bug:

 http://d.puremagic.com/issues/show_bug.cgi?id=4174
That should be what the doctor prescribed. I updated the report and assigned it to Walter.
I saw, thanks. -Steve
Dec 29 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/29/10 2:58 PM, Steven Schveighoffer wrote:
 On Wed, 29 Dec 2010 15:38:27 -0500, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 On 12/29/10 2:10 PM, Steven Schveighoffer wrote:
 On Wed, 29 Dec 2010 14:42:53 -0500, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 On 12/27/10 6:55 PM, Andrei Alexandrescu wrote:
 On 12/27/10 12:35 PM, bearophile wrote:
 Through Reddit I have found a link to some information about the Clay
 language, it wants to be (or it will be) a C++-class language, but
 it's not tied to C syntax. It shares several semantic similarities
 with D too. It looks like a cute language:
 https://github.com/jckarter/clay/wiki/
[snip] FWIW I just posted a response to a question asking for a comparison between Clay and D2. http://www.reddit.com/r/programming/comments/es2jx/clay_programming_language_wiki/
That thread is shaping up more and more interesting because it's turning into a discussion of generic programming at large.
I wanted to address your post in the reddit discussion regarding the issue of operator overloads not being virtual: "This non-issue has been discussed in the D newsgroup. You can implement virtuals on top of non-virtuals efficiently, but not vice versa." I've found some very real problems with that, when implementing operator overloads in dcollections. It's forced me to use the (yet to be deprecated) opXXX forms. Specifically, you cannot use covariance with templated functions without repeating the entire implementation in the derived class.
Glad you're bringing that up. Could you please post an example that summarizes the issue?
With D1: interface List { List opCat(List other); } class LinkList : List { LinkList opCat(List other) {...} } With D2: interface List { List doCat(List other); // implement this in derived class List opBinary(string op)(List other) if (op == "~") { return doCat(other); } } class LinkList : List { LinkList doCat(List other) {...} } // usage; LinkList ll = new LinkList(1, 2, 3); ll = ll ~ ll; // works with D1, fails on D2, "can't assign List to LinkList" Solution is to restate opBinary in all dervied classes with *exact same code* but different return type. I find this solution unacceptable.
I understand, thanks for taking the time to share. The solution to this matter as I see it is integrated with another topic - usually you want to define groups of operators, which means you'd want to define an entire translation layer from static operators to overridable ones. Here's the code I suggest along those lines. I used a named function instead of a template to avoid 4174:

template translateOperators()
{
    auto ohPeeCat(List other) { return doCat(other); }
}

interface List
{
    List doCat(List other); // implement this in derived class
}

class LinkList : List
{
    LinkList doCat(List other) { return this; }
    mixin translateOperators!();
}

void main(string[] args)
{
    LinkList ll = new LinkList;
    ll = ll.ohPeeCat(ll);
}

The translateOperators template would generally define a battery of operators depending on e.g. whether appropriate implementations are found in the host class (in this case LinkList).

Andrei
Dec 29 2010
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 29 Dec 2010 16:14:11 -0500, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 12/29/10 2:58 PM, Steven Schveighoffer wrote:
 On Wed, 29 Dec 2010 15:38:27 -0500, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 On 12/29/10 2:10 PM, Steven Schveighoffer wrote:
 On Wed, 29 Dec 2010 14:42:53 -0500, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 On 12/27/10 6:55 PM, Andrei Alexandrescu wrote:
 On 12/27/10 12:35 PM, bearophile wrote:
 Through Reddit I have found a link to some information about the  
 Clay
 language, it wants to be (or it will be) a C++-class language, but
 it's not tied to C syntax. It shares several semantic similarities
 with D too. It looks like a cute language:
 https://github.com/jckarter/clay/wiki/
[snip] FWIW I just posted a response to a question asking for a comparison between Clay and D2. http://www.reddit.com/r/programming/comments/es2jx/clay_programming_language_wiki/
That thread is shaping up more and more interesting because it's turning into a discussion of generic programming at large.
I wanted to address your post in the reddit discussion regarding the issue of operator overloads not being virtual: "This non-issue has been discussed in the D newsgroup. You can implement virtuals on top of non-virtuals efficiently, but not vice versa." I've found some very real problems with that, when implementing operator overloads in dcollections. It's forced me to use the (yet to be deprecated) opXXX forms. Specifically, you cannot use covariance with templated functions without repeating the entire implementation in the derived class.
Glad you're bringing that up. Could you please post an example that summarizes the issue?
With D1: interface List { List opCat(List other); } class LinkList : List { LinkList opCat(List other) {...} } With D2: interface List { List doCat(List other); // implement this in derived class List opBinary(string op)(List other) if (op == "~") { return doCat(other); } } class LinkList : List { LinkList doCat(List other) {...} } // usage; LinkList ll = new LinkList(1, 2, 3); ll = ll ~ ll; // works with D1, fails on D2, "can't assign List to LinkList" Solution is to restate opBinary in all dervied classes with *exact same code* but different return type. I find this solution unacceptable.
I understand, thanks for taking the time to share. The solution to this matter as I see it is integrated with another topic - usually you want to define groups of operators, which means you'd want to define an entire translation layer from static operators to overridable ones. Here's the code I suggest along those lines. I used a named function instead of a template to avoid 4174: template translateOperators() { auto ohPeeCat(List other) { return doCat(other); } } interface List { List doCat(List other); // implement this in derived class } class LinkList : List { LinkList doCat(List other) { return this; } mixin translateOperators!(); } void main(string[] args) { LinkList ll = new LinkList; ll = ll.ohPeeCat(ll); } The translateOperators template would generally define a battery of operators depending on e.g. whether appropriate implementations are found in the host class (in this case LinkList).
I'm assuming you meant this (once the bug is fixed):

template translateOperators()
{
    auto opBinary(string op)(List other) if (op == "~") { return doCat(other); }
}

and adding this mixin to the interface? I find this solution extremely convoluted, not to mention bloated, and how do the docs work? It's like we're going back to C macros! This operator overloading scheme is way more trouble than the original.

The thing I find ironic is that with the original operator overloading scheme, the issue was that types defining multiple operator overloads in a similar fashion were forced to repeat boilerplate code. The solution to it was a mixin similar to what you are suggesting. Except now, even mundane and common operator overloads require verbose template definitions (possibly with mixins), and it's the uncommon case that benefits. So really, we haven't made any progress (mixins are still required, except now they will be more common). I think this is one area where D has gotten decidedly worse. I mean, just look at the difference above between defining the opCat operator in D1 and your mixin solution!

As a compromise, can we work on a way to forward covariance, or to have the compiler reevaluate the template in more derived types?

-Steve
Dec 30 2010
next sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2010-12-30 10:00:05 -0500, "Steven Schveighoffer" 
<schveiguy yahoo.com> said:

 The thing I find ironic is that with the original operator overloading  
 scheme, the issue was that for types that define multiple operator  
 overloads in a similar fashion, forcing you to repeat boilerplate code. 
   The solution to it was a mixin similar to what you are suggesting.  
 Except  now, even mundane and common operator overloads require verbose 
 template  definitions (possibly with mixins), and it's the uncommon 
 case that  benefits.  So really, we haven't made any progress (mixins 
 are still  required, except now they will be more common).  I think 
 this is one area  where D has gotten decidedly worse.  I mean, just 
 look at the difference  above between defining the opcat operator in D1 
 and your mixin solution!
I'm with you, I preferred the old design.
 As a compromise, can we work on a way to forward covariance, or to have 
  the compiler reevaluate the template in more derived types?
I stumbled upon this yesterday:

	Template This Parameters

	TemplateThisParameters are used in member function templates to pick up the type of the this reference.

	import std.stdio;

	struct S
	{
		const void foo(this T)(int i)
		{
			writeln(typeid(T));
		}
	}

<http://www.digitalmars.com/d/2.0/template.html>

Looks like you could return the type of this this way...

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Dec 30 2010
next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 30 Dec 2010 10:22:22 -0500, Michel Fortin  
<michel.fortin michelf.com> wrote:

 On 2010-12-30 10:00:05 -0500, "Steven Schveighoffer"  
 <schveiguy yahoo.com> said:

 The thing I find ironic is that with the original operator overloading   
 scheme, the issue was that for types that define multiple operator   
 overloads in a similar fashion, forcing you to repeat boilerplate code.  
   The solution to it was a mixin similar to what you are suggesting.   
 Except  now, even mundane and common operator overloads require verbose  
 template  definitions (possibly with mixins), and it's the uncommon  
 case that  benefits.  So really, we haven't made any progress (mixins  
 are still  required, except now they will be more common).  I think  
 this is one area  where D has gotten decidedly worse.  I mean, just  
 look at the difference  above between defining the opcat operator in D1  
 and your mixin solution!
I'm with you, I preferred the old design.
 As a compromise, can we work on a way to forward covariance, or to have  
  the compiler reevaluate the template in more derived types?
I stubbled upon this yesterday: Template This Parameters TemplateThisParameters are used in member function templates to pick up the type of the this reference. import std.stdio; struct S { const void foo(this T)(int i) { writeln(typeid(T)); } } <http://www.digitalmars.com/d/2.0/template.html> Looks like you could return the type of this this way...
Damn! That's very, very close to what I wanted! Thanks for stumbling on that ;) I don't even need to repeat it at derived levels (just tried it out).

The one issue I see now is that I have to cast this to T, which involves a runtime cast. From what I understand, covariance does not incur a runtime cast penalty, because the adjustment lookup is done at compile time. I wonder if the compiler can optimize out the runtime dynamic cast in this case, Walter?

Now, for a working (but not quite as efficient as D1) solution, all I need is the bug fix for allowing templates in interfaces.

-Steve
Dec 30 2010
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/30/10 9:22 AM, Michel Fortin wrote:
 On 2010-12-30 10:00:05 -0500, "Steven Schveighoffer"
 <schveiguy yahoo.com> said:

 The thing I find ironic is that with the original operator overloading
 scheme, the issue was that for types that define multiple operator
 overloads in a similar fashion, forcing you to repeat boilerplate
 code. The solution to it was a mixin similar to what you are
 suggesting. Except now, even mundane and common operator overloads
 require verbose template definitions (possibly with mixins), and it's
 the uncommon case that benefits. So really, we haven't made any
 progress (mixins are still required, except now they will be more
 common). I think this is one area where D has gotten decidedly worse.
 I mean, just look at the difference above between defining the opcat
 operator in D1 and your mixin solution!
I'm with you, I preferred the old design.
This is water under the bridge now, but I am definitely interested. What are the reasons for which you find the old design better?
 As a compromise, can we work on a way to forward covariance, or to
 have the compiler reevaluate the template in more derived types?
I stubbled upon this yesterday: Template This Parameters TemplateThisParameters are used in member function templates to pick up the type of the this reference. import std.stdio; struct S { const void foo(this T)(int i) { writeln(typeid(T)); } } <http://www.digitalmars.com/d/2.0/template.html> Looks like you could return the type of this this way...
typeof(this) works too. Andrei
Dec 30 2010
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 30 Dec 2010 11:02:43 -0500, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 12/30/10 9:22 AM, Michel Fortin wrote:
 I stubbled upon this yesterday:

 Template This Parameters

 TemplateThisParameters are used in member function templates to pick up
 the type of the this reference.
 import std.stdio;

 struct S
 {
 const void foo(this T)(int i)
 {
 writeln(typeid(T));
 }
 }

 <http://www.digitalmars.com/d/2.0/template.html>

 Looks like you could return the type of this this way...
typeof(this) works too.
Nope. The template this parameter assumes the type of 'this' at the call site, not at declaration. typeof(this) means the type at declaration time.

-Steve
Dec 30 2010
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/30/10 10:10 AM, Steven Schveighoffer wrote:
 On Thu, 30 Dec 2010 11:02:43 -0500, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 On 12/30/10 9:22 AM, Michel Fortin wrote:
 I stubbled upon this yesterday:

 Template This Parameters

 TemplateThisParameters are used in member function templates to pick up
 the type of the this reference.
 import std.stdio;

 struct S
 {
 const void foo(this T)(int i)
 {
 writeln(typeid(T));
 }
 }

 <http://www.digitalmars.com/d/2.0/template.html>

 Looks like you could return the type of this this way...
typeof(this) works too.
Nope. the template this parameter assumes the type of 'this' at the call site, not at declaration. typeof(this) means the type at declaration time.
Got it. Now I'm just waiting for your next post's ire :o). Andrei
Dec 30 2010
prev sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2010-12-30 11:02:43 -0500, Andrei Alexandrescu 
<SeeWebsiteForEmail erdani.org> said:

 On 12/30/10 9:22 AM, Michel Fortin wrote:
 On 2010-12-30 10:00:05 -0500, "Steven Schveighoffer"
 <schveiguy yahoo.com> said:
 
 The thing I find ironic is that with the original operator overloading
 scheme, the issue was that for types that define multiple operator
 overloads in a similar fashion, forcing you to repeat boilerplate
 code. The solution to it was a mixin similar to what you are
 suggesting. Except now, even mundane and common operator overloads
 require verbose template definitions (possibly with mixins), and it's
 the uncommon case that benefits. So really, we haven't made any
 progress (mixins are still required, except now they will be more
 common). I think this is one area where D has gotten decidedly worse.
 I mean, just look at the difference above between defining the opcat
 operator in D1 and your mixin solution!
I'm with you, I preferred the old design.
This is water under the bridge now, but I am definitely interested. What are the reasons for which you find the old design better?
First, it was simpler to understand. Second, it worked well with inheritance.

The current design requires that you know of templates and template constraints, and it requires complicated workarounds if you're dealing with inheritance (as illustrated by this thread). Basically, we've made a simple, easy to understand feature into an expert-only one.

And for what sake? Sure, the new design has the advantage that you can define multiple operators in one go. But for all the cases where you don't define operators as variations on a theme, and even more for those involving inheritance, it's more complicated now. And defining multiple operators in one go wouldn't have been so hard with the older regime either. All you needed was a mixin to automatically generate properly named functions for each operator the opBinary template can instantiate.

I was always skeptical of this new syntax, and this hasn't changed.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Dec 30 2010
next sibling parent so <so so.do> writes:
 First it was simpler to understand. Second it worked well with  
 inheritance.

 The current design requires that you know of templates and template  
 constrains, and it requires complicated workarounds if you're dealing  
 with inheritance (as illustrated by this thread). Basically, we've made  
 a simple, easy to understand feature into an expert-only one.

 And for what sakes? Sure the new design has the advantage that you can  
 define multiple operators in one go. But for all the cases where you  
 don't define operators to be the same variation on a theme, and even  
 more for those involving inheritance, it's more complicated now. And  
 defining multiple operators in one go wouldn't have been so hard with  
 the older regime either. All you needed was a mixin to automatically  
 generate properly named functions for each operator the opBinary  
 template can instantiate.

 I was always skeptical of this new syntax, and this hasn't changed.
The old style was merely C++ with named operators. Its shortcomings were obvious, and I had always been thinking of a solution exactly like this one. Now it is quite template-friendly, as it should be. For inheritance, I am unable to find a use case that makes sense.

-- 
Using Opera's revolutionary email client: http://www.opera.com/mail/
Dec 30 2010
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/30/10 11:37 AM, Michel Fortin wrote:
 On 2010-12-30 11:02:43 -0500, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> said:

 On 12/30/10 9:22 AM, Michel Fortin wrote:
 On 2010-12-30 10:00:05 -0500, "Steven Schveighoffer"
 <schveiguy yahoo.com> said:

 The thing I find ironic is that with the original operator overloading
 scheme, the issue was that for types that define multiple operator
 overloads in a similar fashion, forcing you to repeat boilerplate
 code. The solution to it was a mixin similar to what you are
 suggesting. Except now, even mundane and common operator overloads
 require verbose template definitions (possibly with mixins), and it's
 the uncommon case that benefits. So really, we haven't made any
 progress (mixins are still required, except now they will be more
 common). I think this is one area where D has gotten decidedly worse.
 I mean, just look at the difference above between defining the opcat
 operator in D1 and your mixin solution!
I'm with you, I preferred the old design.
This is water under the bridge now, but I am definitely interested. What are the reasons for which you find the old design better?
First it was simpler to understand. Second it worked well with inheritance. The current design requires that you know of templates and template constrains, and it requires complicated workarounds if you're dealing with inheritance (as illustrated by this thread). Basically, we've made a simple, easy to understand feature into an expert-only one. And for what sakes? Sure the new design has the advantage that you can define multiple operators in one go. But for all the cases where you don't define operators to be the same variation on a theme, and even more for those involving inheritance, it's more complicated now. And defining multiple operators in one go wouldn't have been so hard with the older regime either. All you needed was a mixin to automatically generate properly named functions for each operator the opBinary template can instantiate. I was always skeptical of this new syntax, and this hasn't changed.
Thanks for the feedback. So let me make sure I understand your arguments. First, you mention that the old design is simpler. Second, you mention that the old design worked better with inheritance and with cases in which each operator needs a separate definition.

I partially (only to a small extent) agree with the first, and I disagree with the second. (Overall my opinion that the new design is a vast improvement hasn't changed.) But I didn't ask for your opinion to challenge or debate it - thanks again for taking the time to share.

Andrei
Dec 30 2010
prev sibling parent reply "Simen kjaeraas" <simen.kjaras gmail.com> writes:
Michel Fortin <michel.fortin michelf.com> wrote:


 I stubbled upon this yesterday:

 	Template This Parameters

 	TemplateThisParameters are used in member function templates to pick up  
 the type of the this reference.
 	import std.stdio;

 	struct S
 	{
 		const void foo(this T)(int i)
 		{
 			writeln(typeid(T));
 		}
 	}

 <http://www.digitalmars.com/d/2.0/template.html>

 Looks like you could return the type of this this way...
The problem with template this parameters is that they don't work as one might expect:

class A
{
    void bar( this T )( )
    {
        writeln( typeid( T ) );
    }
}

class B : A
{
}

void main( )
{
    A a = new B;
    a.bar();
}

Surely this will print modulename.B, right? Wrong. It prints modulename.A, as A is the type of the reference from which the function is called. It looks like a way to automagically override a function for each subclass, but no.

-- 
Simen
Dec 30 2010
parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 12/31/10, Simen kjaeraas <simen.kjaras gmail.com> wrote:

This will give you both:

class A
{
    void bar(this T) ( )
    {
        writeln(typeid(T));
        writeln(typeid(this));
    }
}

class B : A
{
}

void main( )
{
    A a = new B;
    a.bar();
}
Dec 30 2010
parent reply "Simen kjaeraas" <simen.kjaras gmail.com> writes:
Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:

 On 12/31/10, Simen kjaeraas <simen.kjaras gmail.com> wrote:

 This will give you both:

 class A
 {
     void bar(this T) ( )
     {
         writeln(typeid(T));
         writeln(typeid(this));
     }
 }

 class B : A
 {
 }

 void main( )
 {
     A a = new B;
     a.bar();
 }
Indeed it will. Now, if you look closely, you will see that typeid(T) is A, while typeid(this) is B. Testing further:

class A
{
    void baz()
    {
        writeln(typeid(this));
    }
}

class B : A
{
}

void main()
{
    A a = new B;
    a.baz(); // prints B.
}

We thus see that the template this parameter has absolutely no value.

-- 
Simen
Dec 30 2010
next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 12/31/10, Simen kjaeraas <simen.kjaras gmail.com> wrote:
 We thus see that the template this parameter has absolutely no value.
Oh, you thought the D documentation is describing features *that work*? Ha! Classic mistake. You see, the D documentation is about showcasing features which don't work, while TDPL is about showcasing features which don't even exist yet.
Dec 30 2010
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 30 Dec 2010 21:14:28 -0500, Simen kjaeraas  
<simen.kjaras gmail.com> wrote:

 Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:

 On 12/31/10, Simen kjaeraas <simen.kjaras gmail.com> wrote:

 This will give you both:

 class A
 {
     void bar(this T) ( )
     {
         writeln(typeid(T));
         writeln(typeid(this));
     }
 }

 class B : A
 {
 }

 void main( )
 {
     A a = new B;
     a.bar();
 }
Indeed it will. Now, if you look closely, you will see that typeid(T) is A, while typeid(this) is B. Testing further: class A { void baz() { writeln(typeid(this)); } } class B : A { } void main() { A a = new B; a.baz(); // prints B. } We thus see that the template this parameter has absolutely no value.
No, it does have value:

class A
{
    string x;
    T setX(this T, U)(U newx)
    {
        this.x = to!string(newx);
        return cast(T)this;
    }
}

class B : A
{
    int y;
    void setY(int newy)
    {
        this.y = newy;
    }
}

void main()
{
    auto b = new B;
    b.setX(5).setY(6);
}

Hey, look! Covariance with templates :) Now if only templates worked in interfaces... I also have to write a helper function to eliminate that cast (which does a runtime lookup).

-Steve
Dec 31 2010
parent "Simen kjaeraas" <simen.kjaras gmail.com> writes:
Steven Schveighoffer <schveiguy yahoo.com> wrote:

 We thus see that the template this parameter has absolutely no value.
No, it does have value:
[snip]
 Hey, look! covariance with templates :)
Ah, yes indeed. It seems I have misunderstood the purpose of the feature. -- Simen
Dec 31 2010
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/30/10 9:00 AM, Steven Schveighoffer wrote:
 On Wed, 29 Dec 2010 16:14:11 -0500, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 On 12/29/10 2:58 PM, Steven Schveighoffer wrote:
 On Wed, 29 Dec 2010 15:38:27 -0500, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 On 12/29/10 2:10 PM, Steven Schveighoffer wrote:
 On Wed, 29 Dec 2010 14:42:53 -0500, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 On 12/27/10 6:55 PM, Andrei Alexandrescu wrote:
 On 12/27/10 12:35 PM, bearophile wrote:
 Through Reddit I have found a link to some information about the
 Clay
 language, it wants to be (or it will be) a C++-class language, but
 it's not tied to C syntax. It shares several semantic similarities
 with D too. It looks like a cute language:
 https://github.com/jckarter/clay/wiki/
[snip] FWIW I just posted a response to a question asking for a comparison between Clay and D2. http://www.reddit.com/r/programming/comments/es2jx/clay_programming_language_wiki/
That thread is shaping up more and more interesting because it's turning into a discussion of generic programming at large.
I wanted to address your post in the reddit discussion regarding the issue of operator overloads not being virtual: "This non-issue has been discussed in the D newsgroup. You can implement virtuals on top of non-virtuals efficiently, but not vice versa." I've found some very real problems with that, when implementing operator overloads in dcollections. It's forced me to use the (yet to be deprecated) opXXX forms. Specifically, you cannot use covariance with templated functions without repeating the entire implementation in the derived class.
Glad you're bringing that up. Could you please post an example that summarizes the issue?
With D1:

interface List {
    List opCat(List other);
}

class LinkList : List {
    LinkList opCat(List other) {...}
}

With D2:

interface List {
    List doCat(List other); // implement this in derived class
    List opBinary(string op)(List other) if (op == "~") {
        return doCat(other);
    }
}

class LinkList : List {
    LinkList doCat(List other) {...}
}

// usage:
LinkList ll = new LinkList(1, 2, 3);
ll = ll ~ ll; // works with D1, fails on D2: "can't assign List to LinkList"

The solution is to restate opBinary in all derived classes with the *exact
same code* but a different return type. I find this solution unacceptable.
I understand, thanks for taking the time to share. The solution to this
matter as I see it is integrated with another topic - usually you want to
define groups of operators, which means you'd want to define an entire
translation layer from static operators to overridable ones. Here's the
code I suggest along those lines. I used a named function instead of a
template to avoid 4174:

template translateOperators() {
    auto ohPeeCat(List other) { return doCat(other); }
}

interface List {
    List doCat(List other); // implement this in derived class
}

class LinkList : List {
    LinkList doCat(List other) { return this; }
    mixin translateOperators!();
}

void main(string[] args) {
    LinkList ll = new LinkList;
    ll = ll.ohPeeCat(ll);
}

The translateOperators template would generally define a battery of
operators depending on e.g. whether appropriate implementations are found
in the host class (in this case LinkList).
I'm assuming you meant this (once the bug is fixed):

template translateOperators() {
    auto opBinary(string op)(List other) if (op == "~") { return doCat(other); }
}

and adding this mixin to the interface?
In fact if the type doesn't define doCat the operator shouldn't be
generated:

auto opBinary(string op, T)(T other)
    if (op == "~" && is(typeof(doCat(other))))
{
    return doCat(other);
}

The other thing that I didn't mention, and that I think would save you some
grief, is that this is meant to be a once-for-all library solution, not
code that needs to be written by the user. In fact I'm thinking the mixin
should translate from the new scheme to the old one. So for people who want
to use operator overloading with inheritance we can say: just import
std.typecons and mixin(translateOperators()) in your class definition. I
think this is entirely reasonable.
 I find this solution extremely convoluted, not to mention bloated, and
 how do the docs work? It's like we're going back to C macros! This
 operator overloading scheme is way more trouble than the original.
How do you mean bloated? For documentation you specify in the documentation of the type what operators it supports, or for each named method you specify that operator xxx forwards to it.
 The thing I find ironic is that with the original operator overloading
 scheme, the issue was that for types that define multiple operator
 overloads in a similar fashion, forcing you to repeat boilerplate code.
 The solution to it was a mixin similar to what you are suggesting.
 Except now, even mundane and common operator overloads require verbose
 template definitions (possibly with mixins), and it's the uncommon case
 that benefits.
Not at all. The common case is shorter and simpler. I wrote the chapter on operator overloading twice, once for the old scheme and once for the new one. It uses commonly-encountered designs for its code samples. The chapter and its code samples got considerably shorter in the second version. You can't blow your one example into an epic disaster.
 So really, we haven't made any progress (mixins are still
 required, except now they will be more common). I think this is one area
 where D has gotten decidedly worse. I mean, just look at the difference
 above between defining the opcat operator in D1 and your mixin solution!
I very strongly believe the new operator overloading is a vast improvement over the existing one and over most of today's languages. We shouldn't discount all of its advantages and focus exclusively on covariance, which is a rather obscure facility. Using operator overloading in conjunction with class inheritance is rare. Rare as it is, we need to allow it and make it convenient. I believe this is eminently possible along the lines discussed in this thread.
 As a compromise, can we work on a way to forward covariance, or to have
 the compiler reevaluate the template in more derived types?
I understand. I've had this lure a few times, too. The concern there is that this is a potentially surprising change. Andrei
Dec 30 2010
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 30 Dec 2010 11:00:20 -0500, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 12/30/10 9:00 AM, Steven Schveighoffer wrote:
 I'm assuming you meant this (once the bug is fixed):

 template translateOperators()
 {
 auto opBinary(string op)(List other) if (op == "~") { return doCat(other); }
 }

 and adding this mixin to the interface?
In fact if the type doesn't define doCat the operator shouldn't be
generated:

auto opBinary(string op, T)(T other)
    if (op == "~" && is(typeof(doCat(other))))
{
    return doCat(other);
}

The other thing that I didn't mention, and that I think would save you some
grief, is that this is meant to be a once-for-all library solution, not
code that needs to be written by the user. In fact I'm thinking the mixin
should translate from the new scheme to the old one. So for people who want
to use operator overloading with inheritance we can say: just import
std.typecons and mixin(translateOperators()) in your class definition. I
think this is entirely reasonable.
I'd have to see how it works. I also thought the new operator overloading scheme was reasonable -- until I tried to use it. Note this is even more bloated because you generate one function per pair of types used in concatenation, vs. one function per class defined.
 I find this solution extremely convoluted, not to mention bloated, and
 how do the docs work? It's like we're going back to C macros! This
 operator overloading scheme is way more trouble than the original.
How do you mean bloated? For documentation you specify in the documentation of the type what operators it supports, or for each named method you specify that operator xxx forwards to it.
I mean bloated because you are generating template functions that just
forward to other functions. Those functions are compiled in and take up
space, even if they are inlined out. Let's also realize that the mixin is
going to be required *per interface* and *per class*, meaning even more
bloat.

I agree that if there is a "standard" way of forwarding with a library
mixin, the documentation will be reasonable, since readers should be able
to get used to looking for the 'alternative' operators.
 The thing I find ironic is that with the original operator overloading
 scheme, the issue was that for types that define multiple operator
 overloads in a similar fashion, forcing you to repeat boilerplate code.
 The solution to it was a mixin similar to what you are suggesting.
 Except now, even mundane and common operator overloads require verbose
 template definitions (possibly with mixins), and it's the uncommon case
 that benefits.
Not at all. The common case is shorter and simpler. I wrote the chapter on operator overloading twice, once for the old scheme and once for the new one. It uses commonly-encountered designs for its code samples. The chapter and its code samples got considerably shorter in the second version. You can't blow your one example into an epic disaster.
The case for overloading a single operator is shorter and simpler with the
old method:

auto opAdd(Foo other)

vs.

auto opBinary(string op)(Foo other) if (op == "+")

Where the new scheme wins in brevity (for written code at least, and
certainly not simpler to understand) is cases where:

1. inheritance is not used
2. you can consolidate many overloads into one function.

So the question is, how many times does one define operator overloading on
a multitude of operators *with the same code* vs. how many times does one
define a few operators or defines the operators with different code? In my
experience, I have not yet defined a type that uses a multitude of
operators with the same code. In fact, I have only defined the "~=" and
"~" operators for the most part.

So I'd say, while my example is not proof that this is a disaster, I think
it shows the change in operator overloading cannot yet be declared a
success. One good example does not prove anything, just like one bad
example does not prove anything.
 So really, we haven't made any progress (mixins are still
 required, except now they will be more common). I think this is one area
 where D has gotten decidedly worse. I mean, just look at the difference
 above between defining the opcat operator in D1 and your mixin solution!
I very strongly believe the new operator overloading is a vast improvement over the existing one and over most of today's languages.
I haven't had that experience. This is just me talking. Maybe others
believe it is good.

I agree that the flexibility is good, I really think it should have that
kind of flexibility. Especially when we start talking about the whole
opAddAssign mess that was in D1. It also allows making wrapper types
easier. The problem with flexibility is that it comes with complexity.
Most programmers looking to understand how to overload operators in D are
going to be daunted by having to use both templates and template
constraints, and possibly mixins.

There once was a discussion on how to improve operators on the phobos
mailing list (I don't have the history, because I think it was on
erdani.com). Essentially, the two ideas were:

1) Make it possible to easily specify template constraints for typed
parameters (such as string) like this:

auto opBinary("+")(Foo other)

which would look far less complex and verbose than the current
incarnation, and simple to define when all you need is one or two
operators.

2) Make template instantiations that provably evaluate to a single
instance virtual, or have a way to designate that they should be virtual.
E.g. the above operator syntax can only have one instantiation.
 We shouldn't discount all of its advantages and focus exclusively on  
 covariance, which is a rather obscure facility.
I respectfully disagree. Covariance is very important when using class hierarchies, because to have something that returns itself degrade into a basic interface is very cumbersome. I'd say dcollections would be quite clunky if it weren't for covariance (not just for operator overloads). It feels along the same lines as inout -- where inout allows you to continue using your same type with the same constancy, covariance allows you to continue to use the most derived type that you have.
 Using operator overloading in conjunction with class inheritance is rare.
I don't use operator overloads and class inheritance, but I do use operator overloads with interfaces. I think rare is not the right term, it's somewhat infrequent, but chances are if you do a lot of interfaces, you will encounter it at least once. It certainly doesn't dominate the API being defined.
 Rare as it is, we need to allow it and make it convenient. I believe  
 this is eminently possible along the lines discussed in this thread.
Convenience is good. I hope we can do it at a lower exe footprint cost than what you have proposed.
 As a compromise, can we work on a way to forward covariance, or to have
 the compiler reevaluate the template in more derived types?
I understand. I've had this lure a few times, too. The concern there is that this is a potentially surprising change.
Actually, the functionality almost exists in template this parameters. At least, the reevaluation part is working. However, you still must incur a performance penalty to cast to the derived type, plus the template nature of it adds unnecessary bloat. -Steve
Dec 30 2010
next sibling parent so <so so.do> writes:
 In my experience, I have not yet defined a type that uses a multitude of  
 operators with the same code.  In fact, I have only defined the "~=" and  
 "~" operators for the most part.

 So I'd say, while my example is not proof that this is a disaster, I  
 think it shows the change in operator overloading cannot yet be declared  
 a success.  One good example does not prove anything just like one bad  
 example does not prove anything.
Operator overloading shines in numeric code, which I guess is the target
audience for this feature. In that case, you mostly change a single
character, and that is the operator.
 I haven't had that experience.  This is just me talking.  Maybe others  
 believe it is good.
This new scheme is just pure win, again for numeric coding.
 Using operator overloading in conjunction with class inheritance is  
 rare.
So rare that if you see operator overloading and virtual inheritance,
you'd better be sure there is not something fishy going on.

--
Using Opera's revolutionary email client: http://www.opera.com/mail/
Dec 30 2010
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/30/10 11:08 AM, Steven Schveighoffer wrote:
 I'd have to see how it works. I also thought the new operator
 overloading scheme was reasonable -- until I tried to use it.
You mean until you tried to use it /once/.
 Note this is even more bloated because you generate one function per
 pair of types used in concatenation, vs. one function per class defined.
That function is inlined and vanishes out of existence. I wish one day we'd characterize this bloating issue more precisely. Right now anything generic has the "bloated!!" alarm stuck to it indiscriminately.
 How do you mean bloated? For documentation you specify in the
 documentation of the type what operators it supports, or for each
 named method you specify that operator xxx forwards to it.
I mean bloated because you are generating template functions that just forward to other functions. Those functions are compiled in and take up space, even if they are inlined out.
I think we can safely leave this matter to compiler technology.
 Let's also realize that the mixin is going to be required *per
 interface* and *per class*, meaning even more bloat.
The bloating argument is a complete red herring in this case. I do agree that generally it could be a concern and I also agree that the compiler needs to be improved in that regard. But by and large I think we can calmly and safely think that a simple short function is not a source of worry.
 I agree if there is a "standard" way of forwarding with a library mixin,
 the documentation will be reasonable, since readers should be able to
 get used to looking for the 'alternative' operators.
Whew :o).
 The thing I find ironic is that with the original operator overloading
 scheme, the issue was that for types that define multiple operator
 overloads in a similar fashion, forcing you to repeat boilerplate code.
 The solution to it was a mixin similar to what you are suggesting.
 Except now, even mundane and common operator overloads require verbose
 template definitions (possibly with mixins), and it's the uncommon case
 that benefits.
Not at all. The common case is shorter and simpler. I wrote the chapter on operator overloading twice, once for the old scheme and once for the new one. It uses commonly-encountered designs for its code samples. The chapter and its code samples got considerably shorter in the second version. You can't blow your one example into an epic disaster.
The case for overloading a single operator is shorter and simpler with the
old method:

auto opAdd(Foo other)

vs.

auto opBinary(string op)(Foo other) if (op == "+")

Where the new scheme wins in brevity (for written code at least, and
certainly not simpler to understand) is cases where:

1. inheritance is not used
2. you can consolidate many overloads into one function.

So the question is, how many times does one define operator overloading on
a multitude of operators *with the same code* vs. how many times does one
define a few operators or defines the operators with different code? In my
experience, I have not yet defined a type that uses a multitude of
operators with the same code. In fact, I have only defined the "~=" and
"~" operators for the most part.
Based on extensive experience with operator overloading in C++ and on having read related code in other languages, I can firmly say both of (1) and (2) are the overwhelmingly common case.
 So I'd say, while my example is not proof that this is a disaster, I
 think it shows the change in operator overloading cannot yet be declared
 a success. One good example does not prove anything just like one bad
 example does not prove anything.
Many good examples do prove a ton though. Just off the top of my head:

- complex numbers
- checked integers
- checked floating point numbers
- ranged/constrained numbers
- big int
- big float
- matrices and vectors
- dimensional analysis (SI units)
- rational numbers
- fixed-point numbers

If I agree with something, it is that opCat is an oddity here, as it
doesn't usually group with others. Probably it would have helped if opCat
had been left named (just like opEquals or opCmp), but then uniformity has
its advantages too. I don't think it's a disaster one way or another, but I
do understand how opCat in particular is annoying to your case.
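[Editor's sketch, not code from the thread: a minimal checked-integer type
(the CheckedInt name and the omitted overflow handling are hypothetical)
showing how such types bundle several operators in one template body, which
is the grouping being claimed here.]

```d
import std.stdio;

struct CheckedInt
{
    long value;

    // One template body covers +, - and *: the string mixin expands to
    // "value + rhs.value" etc. at compile time, so all three operators
    // share a single definition.
    CheckedInt opBinary(string op)(CheckedInt rhs)
        if (op == "+" || op == "-" || op == "*")
    {
        auto r = mixin("value " ~ op ~ " rhs.value");
        // a real checked type would test for overflow here
        return CheckedInt(r);
    }
}

void main()
{
    auto a = CheckedInt(6), b = CheckedInt(7);
    writeln((a + b).value); // 13
    writeln((a * b).value); // 42
}
```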
 So really, we haven't made any progress (mixins are still
 required, except now they will be more common). I think this is one area
 where D has gotten decidedly worse. I mean, just look at the difference
 above between defining the opcat operator in D1 and your mixin solution!
I very strongly believe the new operator overloading is a vast improvement over the existing one and over most of today's languages.
I haven't had that experience. This is just me talking. Maybe others
believe it is good.

I agree that the flexibility is good, I really think it should have that
kind of flexibility. Especially when we start talking about the whole
opAddAssign mess that was in D1. It also allows making wrapper types
easier.

The problem with flexibility is that it comes with complexity. Most
programmers looking to understand how to overload operators in D are going
to be daunted by having to use both templates and template constraints,
and possibly mixins.
Most programmers looking to understand how to overload operators in D will need to bundle them (see the common case argument above) and will go with the TDPL examples, which are clear, short, simple, and useful.
 There once was a discussion on how to improve operators on the phobos
 mailing list (don't have the history, because i think it was on
 erdani.com). Essentially, the two things were:

 1) let's make it possible to easily specify template constraints for
 typed parameters (such as string) like this:

 auto opBinary("+")(Foo other)

 which would look far less complex and verbose than the current
 incarnation. And simple to define when all you need is one or two
 operators.
I don't see this slight syntactic special case a net improvement over what we have.
 2) make template instantiations that provably evaluate to a single
 instance virtual. Or have a way to designate they should be virtual.
 e.g. the above operator syntax can only have one instantiation.
This may be worth exploring, but since template constraints are arbitrary expressions I fear it will become a mess of special cases designed to avoid the Turing tarpit.
 We shouldn't discount all of its advantages and focus exclusively on
 covariance, which is a rather obscure facility.
I respectfully disagree. Covariance is very important when using class hierarchies, because to have something that returns itself degrade into a basic interface is very cumbersome. I'd say dcollections would be quite clunky if it weren't for covariance (not just for operator overloads). It feels along the same lines as inout -- where inout allows you to continue using your same type with the same constancy, covariance allows you to continue to use the most derived type that you have.
Okay, I understand.
 Using operator overloading in conjunction with class inheritance is rare.
I don't use operator overloads and class inheritance, but I do use operator overloads with interfaces. I think rare is not the right term, it's somewhat infrequent, but chances are if you do a lot of interfaces, you will encounter it at least once. It certainly doesn't dominate the API being defined.
Maybe a more appropriate characterization is that you use catenation with interfaces.
 Rare as it is, we need to allow it and make it convenient. I believe
 this is eminently possible along the lines discussed in this thread.
Convenience is good. I hope we can do it at a lower exe footprint cost than what you have proposed.
We need to destroy Walter over that code bloating thing :o).
 As a compromise, can we work on a way to forward covariance, or to have
 the compiler reevaluate the template in more derived types?
I understand. I've had this lure a few times, too. The concern there is that this is a potentially surprising change.
Actually, the functionality almost exists in template this parameters. At least, the reevaluation part is working. However, you still must incur a performance penalty to cast to the derived type, plus the template nature of it adds unnecessary bloat.
Saw that. I have a suspicion that we'll see a solid solution from you soon! Andrei
Dec 30 2010
next sibling parent reply "Jérôme M. Berger" <jeberger free.fr> writes:
Andrei Alexandrescu wrote:
 Many good examples do prove a ton though. Just off the top of my head:
 - complex numbers
Multiplication and division are different from each other and from addition and subtraction.
 - checked integers
 - checked floating point numbers
 - ranged/constrained numbers
More or less the same case, so I'm not sure that they make three. Other than that agreed.
 - big int
 - big float
 - matrices and vectors
 - dimensional analysis (SI units)
 - rational numbers
 - fixed-point numbers
For all of those, multiplication and division are different from each
other and from addition and subtraction. So what your examples do is
actually prove *Steven's* point: most of the time, the code is not shared
between operators.

Jerome

--
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Dec 30 2010
next sibling parent reply so <so so.do> writes:
On Thu, 30 Dec 2010 21:15:30 +0200, Jérôme M. Berger <jeberger free.fr>
wrote:

 Andrei Alexandrescu wrote:
 Many good examples do prove a ton though. Just off the top of my head:
 - complex numbers
Multiplication and division are different from each other and from addition and subtraction.
 - checked integers
 - checked floating point numbers
 - ranged/constrained numbers
More or less the same case, so I'm not sure that they make three. Other than that agreed.
 - big int
 - big float
 - matrices and vectors
 - dimensional analysis (SI units)
 - rational numbers
 - fixed-point numbers
 For all of those, multiplication and division are different from each
 other and from addition and subtraction. So what your examples do is
 actually prove *Steven's* point: most of the time, the code is not shared
 between operators.

 Jerome
First, most of these don't even have a division operator defined in math.
Second, you prove Andrei's point, not the other way around, since it makes
the generic case easier, the particular case harder.

--
Using Opera's revolutionary email client: http://www.opera.com/mail/
Dec 30 2010
next sibling parent so <so so.do> writes:
s/most/some

-- 
Using Opera's revolutionary email client: http://www.opera.com/mail/
Dec 30 2010
prev sibling parent reply "Jérôme M. Berger" <jeberger free.fr> writes:
so wrote:
 On Thu, 30 Dec 2010 21:15:30 +0200, Jérôme M. Berger <jeberger free.fr>
 wrote:
 Andrei Alexandrescu wrote:
  Many good examples do prove a ton though. Just off the top of my head:
 - complex numbers
 Multiplication and division are different from each other and from
 addition and subtraction.

 - checked integers
 - checked floating point numbers
 - ranged/constrained numbers
More or less the same case, so I'm not sure that they make three. Other than that agreed.
 - big int
 - big float
 - matrices and vectors
 - dimensional analysis (SI units)
 - rational numbers
 - fixed-point numbers
For all of those, multiplication and division are different from each other and from addition and subtraction. So what your examples do is actually prove *Steven's* point: most of the time, the code is not shared between operators. Jerome
 First, most of these don't even have a division operator defined in math.
?? The only type in this list without a division operator is vector; all
the others have it.
 Second, you prove Andrei's point, not the other way around, since it
 makes the generic case easier, particular case harder.
I'm sorry? What do you call the "generic case" here? All this list shows
is that each operator needs to be implemented individually anyway.
Andrei's point was exactly the reverse: he claims that most operators can
be implemented in groups, which clearly isn't the case here.

Jerome

--
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Dec 30 2010
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/30/10 3:36 PM, "Jérôme M. Berger" wrote:
 so wrote:
 On Thu, 30 Dec 2010 21:15:30 +0200, Jérôme M. Berger<jeberger free.fr>
 wrote:

 Andrei Alexandrescu wrote:
 Many good examples do prove a ton though. Just off the top of my head:

 - complex numbers
Multiplication and division are different from each other and from addition and subtraction.
 - checked integers
 - checked floating point numbers
 - ranged/constrained numbers
More or less the same case, so I'm not sure that they make three. Other than that agreed.
 - big int
 - big float
 - matrices and vectors
 - dimensional analysis (SI units)
 - rational numbers
 - fixed-point numbers
For all of those, multiplication and division are different from each other and from addition and subtraction. So what your examples do is actually prove *Steven's* point: most of the time, the code is not shared between operators. Jerome
First, most of these don't even have a division operator defined in math.
?? The only type in this list without a division operator is vector; all
the others have it.
 Second, you prove Andrei's point, not the other way around, since it
 makes the generic case easier, particular case harder.
I'm sorry? What do you call the "generic case" here? All this list shows is that each operator needs to be implemented individually anyway. Andrei's point was exactly the reverse: he claims that most operators can be implemented in groups which clearly isn't the case here.
And I stand by that claim. One aspect that seems to have been forgotten is that types usually implement either op= in terms of op or vice versa. That savings alone is large. Andrei
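[Editor's sketch, not code from the thread: an illustration of that saving,
on a hypothetical Rational type. One opOpAssign template derives every
compound assignment from the corresponding plain operator.]

```d
struct Rational
{
    long num;
    long den = 1;

    // the plain binary operators; only * and / shown here
    Rational opBinary(string op)(Rational rhs)
        if (op == "*" || op == "/")
    {
        static if (op == "*")
            return Rational(num * rhs.num, den * rhs.den);
        else
            return Rational(num * rhs.den, den * rhs.num);
    }

    // *= and /= derived from the above in one stroke
    ref Rational opOpAssign(string op)(Rational rhs)
        if (op == "*" || op == "/")
    {
        this = this.opBinary!op(rhs);
        return this;
    }
}
```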
Dec 30 2010
parent reply "Jérôme M. Berger" <jeberger free.fr> writes:
Andrei Alexandrescu wrote:
 And I stand by that claim. One aspect that seems to have been forgotten
 is that types usually implement either op= in terms of op or vice versa.
 That savings alone is large.

This could have been done with a couple of stdlib mixins
"generateOpsFromOpAssign" and "generateOpAssignsFromOp".

Jerome

--
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Dec 31 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/31/10 7:30 AM, "Jérôme M. Berger" wrote:
 Andrei Alexandrescu wrote:
 And I stand by that claim. One aspect that seems to have been forgotten
 is that types usually implement either op= in terms of op or vice versa.
 That savings alone is large.
This could have been done with a couple of stdlib mixins "generateOpsFromOpAssign" and "generateOpAssignsFromOp".
The language definition would have stayed just as large. Andrei
Dec 31 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/31/10 9:32 AM, Andrei Alexandrescu wrote:
 On 12/31/10 7:30 AM, "Jérôme M. Berger" wrote:
 Andrei Alexandrescu wrote:
 And I stand by that claim. One aspect that seems to have been forgotten
 is that types usually implement either op= in terms of op or vice versa.
 That savings alone is large.
This could have been done with a couple of stdlib mixins "generateOpsFromOpAssign" and "generateOpAssignsFromOp".
The language definition would have stayed just as large. Andrei
Besides, I feel a double standard here. Why are mixins bad for simplifying certain rarely-needed boilerplate, yet are just fine when they supplant a poor design? Andrei
Dec 31 2010
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 31 Dec 2010 10:35:19 -0500, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 12/31/10 9:32 AM, Andrei Alexandrescu wrote:
 On 12/31/10 7:30 AM, "Jérôme M. Berger" wrote:
 Andrei Alexandrescu wrote:
 And I stand by that claim. One aspect that seems to have been  
 forgotten
 is that types usually implement either op= in terms of op or vice  
 versa.
 That savings alone is large.
This could have been done with a couple of stdlib mixins "generateOpsFromOpAssign" and "generateOpAssignsFromOp".
The language definition would have stayed just as large. Andrei
Besides, I feel a double standard here. Why are mixins bad for simplifying certain rarely-needed boilerplate, yet are just fine when they supplant a poor design?
Requiring mixins in any case looks like a poor design to me. Any time
mixins are the answer, the bar is raised significantly for understanding
not only how to write the code, but how to use it as well. Mixins are
great for low-level things that can be abstracted away, but making them
part of your interface looks to me like we're back to C macros. Anyone
trying to follow the code is going to have to jump through quite a few
hoops to understand it.

I think Jerome's point is that the uncommon case of wanting to specify
multiple operators with one template could have been solved with mixins
(which would be abstracted away as implementation details), and then the
benefits we had with the old scheme (simple to understand and write,
automatically virtual, allows covariance, etc.) would not be delegated to
obscure library or compiler tricks. Where the old scheme breaks down is
the whole opIndexAddAssignXYZ mess.

It doesn't matter anyways, we have what we have. Let's just try and fix
the blocker problems (such as no templates in interfaces) and see how we
fare.

-Steve
Dec 31 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/31/10 9:47 AM, Steven Schveighoffer wrote:
 On Fri, 31 Dec 2010 10:35:19 -0500, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 On 12/31/10 9:32 AM, Andrei Alexandrescu wrote:
 On 12/31/10 7:30 AM, "Jérôme M. Berger" wrote:
 Andrei Alexandrescu wrote:
 And I stand by that claim. One aspect that seems to have been
 forgotten
 is that types usually implement either op= in terms of op or vice
 versa.
 That savings alone is large.
This could have been done with a couple of stdlib mixins "generateOpsFromOpAssign" and "generateOpAssignsFromOp".
The language definition would have stayed just as large. Andrei
Besides, I feel a double standard here. Why are mixins bad for simplifying certain rarely-needed boilerplate, yet are just fine when they supplant a poor design?
Requiring mixins in any case looks like a poor design to me. Any time mixins are the answer, it raises significantly the bar for understanding not only how to write the code, but how to use it as well. Mixins are great for low-level things that can be abstracted away, but to make them part of your interface looks to me like we're back to C macros. Anyone trying to follow the code is going to have to jump through quite a few hoops to understand it. I think the point of Jerome is that the uncommon case of wanting to specify multiple operators with one template
I thought I have clearly shown that that is the _common_ case. Andrei
Dec 31 2010
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 31 Dec 2010 12:09:04 -0500, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 12/31/10 9:47 AM, Steven Schveighoffer wrote:
 On Fri, 31 Dec 2010 10:35:19 -0500, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 On 12/31/10 9:32 AM, Andrei Alexandrescu wrote:
 On 12/31/10 7:30 AM, "Jérôme M. Berger" wrote:
 Andrei Alexandrescu wrote:
 And I stand by that claim. One aspect that seems to have been
 forgotten
 is that types usually implement either op= in terms of op or vice
 versa.
 That savings alone is large.
This could have been done with a couple of stdlib mixins "generateOpsFromOpAssign" and "generateOpAssignsFromOp".
The language definition would have stayed just as large. Andrei
Besides, I feel a double standard here. Why are mixins bad for simplifying certain rarely-needed boilerplate, yet are just fine when they supplant a poor design?
Requiring mixins in any case looks like a poor design to me. Any time mixins are the answer, it raises significantly the bar for understanding not only how to write the code, but how to use it as well. Mixins are great for low-level things that can be abstracted away, but to make them part of your interface looks to me like we're back to C macros. Anyone trying to follow the code is going to have to jump through quite a few hoops to understand it. I think the point of Jerome is that the uncommon case of wanting to specify multiple operators with one template
I thought I have clearly shown that that is the _common_ case.
Depends on what you are doing. If you are writing numerical types for standard libraries, yes, it's common to have them, but you generally only write them once. I'd say it's more common to add one or two operators to custom types for syntax sugar than to implement all the math operators on lots of types. But that's just my point of view. Either view is probably too subjective to be "proof". -Steve
Dec 31 2010
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/30/10 3:36 PM, "Jérôme M. Berger" wrote:
 so wrote:
 On Thu, 30 Dec 2010 21:15:30 +0200, Jérôme M. Berger<jeberger free.fr>
 wrote:

 Andrei Alexandrescu wrote:
 Many good examples do prove a ton though. Just off the top of my head:

 - complex numbers
Multiplication and division are different from each other and from addition and subtraction.
 - checked integers
 - checked floating point numbers
 - ranged/constrained numbers
More or less the same case, so I'm not sure that they make three. Other than that agreed.
 - big int
 - big float
 - matrices and vectors
 - dimensional analysis (SI units)
 - rational numbers
 - fixed-point numbers
For all of those, multiplication and division are different from each other and from addition and subtraction. So what your examples do is actually prove *Steven's* point: most of the time, the code is not shared between operators. Jerome
First, most of these don't even have a division operator defined in math.
?? The only type in this list without a division operator is vector; all the others have it.
 Second, you prove Andrei's point, not the other way around, since it
 makes the generic case easier, particular case harder.
I'm sorry? What do you call the "generic case" here? All this list shows is that each operator needs to be implemented individually anyway. Andrei's point was exactly the reverse: he claims that most operators can be implemented in groups which clearly isn't the case here.
Oh, not to mention opIndexXxx. In fact I remember that the breaking point for Walter (where he agreed to implement my design pronto) was the specter of having to define opIndexAddAssign etc., which effectively doubled the number of named operators AND is virtually ALWAYS implemented in a uniform manner.

So let's not forget that the new design not only does what the D1 design does (and better), it also does things that the old design didn't, and in a scalable manner. Does this settle the argument?

Andrei
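For illustration, a minimal sketch of the uniform implementation the new design allows: one template stands in for the entire opIndexAddAssign/opIndexMulAssign/... family (the `Array` type here is hypothetical, not from the thread):

```d
// a[i] op= v is rewritten by the compiler to a.opIndexOpAssign!op(v, i),
// so a single uniform body covers every compound index-assignment operator.
struct Array
{
    int[] data;

    ref int opIndexOpAssign(string op)(int v, size_t i)
    {
        mixin("data[i] " ~ op ~ "= v;");
        return data[i];
    }
}

void main()
{
    auto a = Array([1, 2, 3]);
    a[1] += 10; // one instantiation per operator, same body
    a[2] *= 2;
    assert(a.data == [1, 12, 6]);
}
```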
Dec 30 2010
prev sibling parent reply so <so so.do> writes:
 	?? The only type in this list without a division operator is vector
 all the others have it.
Matrix-matrix, matrix-vector and vector-matrix division are also not defined; there is one syntactic similarity but it is not division. I didn't give much thought to the others since the vector, matrix and scalar operations already take quite a bit of space.
 	I'm sorry? What do you call the "generic case" here? All this list
 shows is that each operator needs to be implemented individually
 anyway. Andrei's point was exactly the reverse: he claims that most
 operators can be implemented in groups which clearly isn't the case
I don't agree; in the majority of cases you duplicate almost all of the code and just change the operator. That is what I meant by "generic case". If I wasn't clear, say:

vector opBinary(string op)(scalar s) if(op == "+" || op == "-" ....)
{
    static if(op == "/")
        return general("*")(1/s); // particular case
    else
        return general(op)(s); // general case, just a one-liner mixin
}

vector opBinary(string op)(vector v) if(op == "+" || op == "-" ....)
{
    return general(op)(v); // again, just a one-liner mixin
}

Same goes for matrix, the particular case being multiplication.

-- 
Using Opera's revolutionary email client: http://www.opera.com/mail/
Dec 30 2010
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 30 Dec 2010 17:03:05 -0500, so <so so.do> wrote:

 	?? The only type in this list without a division operator is vector
 all the others have it.
Matrix-matrix, matrix-vector and vector-matrix division are also not defined; there is one syntactic similarity but it is not division. I didn't give much thought to the others since the vector, matrix and scalar operations already take quite a bit of space.
 	I'm sorry? What do you call the "generic case" here? All this list
 shows is that each operator needs to be implemented individually
 anyway. Andrei's point was exactly the reverse: he claims that most
 operators can be implemented in groups which clearly isn't the case
I don't agree; in the majority of cases you duplicate almost all of the code and just change the operator. That is what I meant by "generic case". If I wasn't clear, say:

vector opBinary(string op)(scalar s) if(op == "+" || op == "-" ....)
{
    static if(op == "/")
        return general("*")(1/s); // particular case
    else
        return general(op)(s); // general case, just a one-liner mixin
}

vector opBinary(string op)(vector v) if(op == "+" || op == "-" ....)
{
    return general(op)(v); // again, just a one-liner mixin
}

Same goes for matrix, the particular case being multiplication.
Actually, that doesn't work currently. But I think this is a situation that can be fixed. Essentially, you can't overload templates. You have to do something like this instead:

vector opBinary(string op, T)(T s) if(is(T == scalar) && (op == "+" || op == "-" ....))
{
    static if(op == "/")
        return general("*")(1/s); // particular case
    else
        return general(op)(s); // general case, just a one-liner mixin
}

vector opBinary(string op, T)(T v) if(is(T == vector) && (op == "+" || op == "-" ....))
{
    return general(op)(v); // again, just a one-liner mixin
}

So, it makes things difficult in this regard too, but I really hope this can be solved. It's already been stated in TDPL that templates will be able to overload with non-templates. I think this means they should overload with templates also.

Note that these solutions may look simple and easy to you, but they look convoluted and messy to me ;)

-Steve
Dec 30 2010
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/30/10 1:15 PM, "Jérôme M. Berger" wrote:
 Andrei Alexandrescu wrote:
 Many good examples do prove a ton though. Just off the top of my head:

 - complex numbers
Multiplication and division are different from each other and from addition and subtraction.
 - checked integers
 - checked floating point numbers
 - ranged/constrained numbers
More or less the same case, so I'm not sure that they make three. Other than that agreed.
 - big int
 - big float
 - matrices and vectors
 - dimensional analysis (SI units)
 - rational numbers
 - fixed-point numbers
For all of those, multiplication and division are different from each other and from addition and subtraction.
That's where the flexibility of grouping really helps. Let's also not forget about things such as unary + and -.
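A sketch of that grouping (on a hypothetical vector type, not from the original post): + and - share one template body, the genuinely different * and / would each be written as their own particular case, and unary + and - group the same way:

```d
struct Vec
{
    double[3] v;

    // elementwise + and - share a single body; * and / would each
    // be their own particular case
    Vec opBinary(string op)(Vec rhs) if (op == "+" || op == "-")
    {
        Vec r;
        foreach (i; 0 .. 3)
            mixin("r.v[i] = v[i] " ~ op ~ " rhs.v[i];");
        return r;
    }

    // unary + and - grouped the same way
    Vec opUnary(string op)() if (op == "+" || op == "-")
    {
        Vec r;
        foreach (i; 0 .. 3)
            mixin("r.v[i] = " ~ op ~ "v[i];");
        return r;
    }
}

void main()
{
    auto a = Vec([1.0, 2.0, 3.0]);
    auto b = Vec([0.5, 0.5, 0.5]);
    assert((a - b).v == [0.5, 1.5, 2.5]);
    assert((-a).v == [-1.0, -2.0, -3.0]);
}
```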
 	So what your examples do is actually prove *Steven's* point: most
 of the time, the code is not shared between operators.
I thought these examples effectively settle the matter. Andrei
Dec 30 2010
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 30 Dec 2010 12:52:32 -0500, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 12/30/10 11:08 AM, Steven Schveighoffer wrote:
 I'd have to see how it works. I also thought the new operator
 overloading scheme was reasonable -- until I tried to use it.
You mean until you tried to use it /once/.
 Note this is even more bloated because you generate one function per
 pair of types used in concatenation, vs. one function per class defined.
That function is inlined and vanishes out of existence. I wish one day we'd characterize this bloating issue more precisely. Right now anything generic has the "bloated!!" alarm stuck to it indiscriminately.
Functions inline out of existence during runtime, but the function itself remains resident in the compiled binary. I don't know whether it's important to keep it in there or not, I just know it's kept.

There are a whole slew of improvements we can make in this regard, but I'm not sure they are possible, because I'm not a compiler writer. One such nuisance in particular is the proliferation of types when you use something like isInputRange. That invariably is *only* used at compile time, yet the type and its typeinfo are injected into the binary.
 I mean bloated because you are generating template functions that just
 forward to other functions. Those functions are compiled in and take up
 space, even if they are inlined out.
I think we can safely leave this matter to compiler technology.
I hope that can be done. D already suffers from the 'hey what gives, how come hello world is 1MB?!!' syndrome.
 Let's also realize that the mixin is going to be required *per
 interface* and *per class*, meaning even more bloat.
The bloating argument is a complete red herring in this case. I do agree that generally it could be a concern and I also agree that the compiler needs to be improved in that regard. But by and large I think we can calmly and safely think that a simple short function is not a source of worry.
Short template functions still have template mangled names. I have found that template names with lots of parameters can slow down compilation, but I think Walter is working on fixing that.

If we can get to a point where language constructs such as this can be truly inlined out of existence, then I think we will be on another level from other languages. It's not an unimportant nuisance to be dealt with later. And at that point, I can agree that the template solution is not bloated ;)
 So I'd say, while my example is not proof that this is a disaster, I
 think it shows the change in operator overloading cannot yet be declared
 a success. One good example does not prove anything just like one bad
 example does not prove anything.
Many good examples do prove a ton though. Just off the top of my head:

- complex numbers
- checked integers
- checked floating point numbers
- ranged/constrained numbers
- big int
- big float
- matrices and vectors
- dimensional analysis (SI units)
- rational numbers
- fixed-point numbers

If I agree with something, it is that opCat is an oddity here as it doesn't usually group with others. Probably it would have helped if opCat had been left named (just like opEquals or opCmp), but then uniformity has its advantages too. I don't think it's a disaster one way or another, but I do understand how opCat in particular is annoying to your case.
Probably the most common operator overload in D is opEquals; luckily that is not a template (even though it sadly does not work with interfaces yet).

It seems that operator overloads fall into categories. There are the numeric overloads, which I agree are generally overloaded in groups. When I defined cursors to be more like C++ iterators in dcollections instead of small ranges, I used the ++ and -- overloads, which you typically define together.

When designing the mixin that allows you to define various operator overloads, I think it would be hugely beneficial to take these groupings into account and make the mixins modular.
 I haven't had that experience. This is just me talking. Maybe others
 believe it is good.

 I agree that the flexibility is good, I really think it should have that
 kind of flexibility. Especially when we start talking about the whole
 opAddAssign mess that was in D1. It also allows making wrapper types
 easier.

 The problem with flexibility is that it comes with complexity. Most
 programmers looking to understand how to overload operators in D are
 going to be daunted by having to use both templates and template
 constraints, and possibly mixins.
Most programmers looking to understand how to overload operators in D will need to bundle them (see the common case argument above) and will go with the TDPL examples, which are clear, short, simple, and useful.
The code itself is simple; it's the "how does x + y match up with this template thingy" part which is the problem, I think. We've already had several posts on d.learn asking how operator overloads work even after reading TDPL.
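The match-up is a purely mechanical rewrite: the compiler turns `x + y` into `x.opBinary!"+"(y)`. A minimal illustration (the type `S` is hypothetical):

```d
struct S
{
    int x;

    S opBinary(string op)(S rhs) if (op == "+")
    {
        return S(x + rhs.x);
    }
}

void main()
{
    auto a = S(1), b = S(2);
    assert((a + b).x == 3);           // the sugar...
    assert(a.opBinary!"+"(b).x == 3); // ...and the rewrite it stands for
}
```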
 There once was a discussion on how to improve operators on the phobos
 mailing list (don't have the history, because i think it was on
 erdani.com). Essentially, the two things were:

 1) let's make it possible to easily specify template constraints for
 typed parameters (such as string) like this:

 auto opBinary("+")(Foo other)

 which would look far less complex and verbose than the current
 incarnation. And simple to define when all you need is one or two
 operators.
I don't see this slight syntactic special case a net improvement over what we have.
It's less intimidating. Max pointed out opBinary(string op : "+"), which is close, but still has some seemingly superfluous syntax (why do I need string op? and what is that : for?). Compare that to C++ operators:

operator+(Foo rhs)

I'd call that very simple to understand in the context of operator overloading.

Also, the proposal is a specialization of templates in general, not just for operator overloading. It translates to the same thing as if you wrote:

opBinary(string $)(Foo other) if($ == "+")

where $ is an inaccessible symbol. It basically optimizes out the parts you don't care about if you don't care about them.
 2) make template instantiations that provably evaluate to a single
 instance virtual. Or have a way to designate they should be virtual.
 e.g. the above operator syntax can only have one instantiation.
This may be worth exploring, but since template constraints are arbitrary expressions I fear it will become a mess of special cases designed to avoid the Turing tarpit.
That's why I qualified it as "provably" evaluating to a single instance. I meant provable by the compiler, so even something that may look to a user like it obviously instantiates to one instance may not be provable by the compiler. It probably requires you to use specific constructs (like the one mentioned above) to help the compiler out.
 Using operator overloading in conjunction with class inheritance is  
 rare.
I don't use operator overloads and class inheritance, but I do use operator overloads with interfaces. I think rare is not the right term, it's somewhat infrequent, but chances are if you do a lot of interfaces, you will encounter it at least once. It certainly doesn't dominate the API being defined.
Maybe a more appropriate characterization is that you use catenation with interfaces.
Concatenation, equality comparison, indexing, and assignment. Out of those, only concatenation gives me headaches because it requires templates. If you propose we remove concatenation from opBinary and give it its own form, then I think that would solve the problem too.
 Actually, the functionality almost exists in template this parameters.
 At least, the reevaluation part is working. However, you still must
 incur a performance penalty to cast to the derived type, plus the
 template nature of it adds unnecessary bloat.
Saw that. I have a suspicion that we'll see a solid solution from you soon!
Alas, no solution is possible without templates being allowed in interfaces :( But yes, I plan to use this technique as soon as it's possible. -Steve
Dec 30 2010
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/30/10 1:17 PM, Steven Schveighoffer wrote:
 On Thu, 30 Dec 2010 12:52:32 -0500, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 On 12/30/10 11:08 AM, Steven Schveighoffer wrote:
 I'd have to see how it works. I also thought the new operator
 overloading scheme was reasonable -- until I tried to use it.
You mean until you tried to use it /once/.
 Note this is even more bloated because you generate one function per
 pair of types used in concatenation, vs. one function per class defined.
That function is inlined and vanishes out of existence. I wish one day we'd characterize this bloating issue more precisely. Right now anything generic has the "bloated!!" alarm stuck to it indiscriminately.
Functions inline out of existence during runtime, but the function itself remains resident in the compiled binary.
Doesn't have to!
 I don't know that it's an important aspect to keep it in there or not, I
 just know it's kept. There are a whole slew of improvements we can make
 in this regard, but I'm not sure they are possible, because I'm not a
 compiler writer.
Please take my word for it: this is a solved problem.
 One such nuisance in particular is the proliferation of types when you
 use something like isInputRange. That invariably is *only* used at
 compile time, yet the type and its typeinfo are injected into the binary.

 I mean bloated because you are generating template functions that just
 forward to other functions. Those functions are compiled in and take up
 space, even if they are inlined out.
I think we can safely leave this matter to compiler technology.
I hope that can be done. D already suffers from the 'hey what gives, how come hello world is 1MB?!!' syndrome.
More like 600KB, but yah :o). Note that the size of the executable is caused by other issues, compared to which the concerns at hand are puny. [snip]
 So I'd say, while my example is not proof that this is a disaster, I
 think it shows the change in operator overloading cannot yet be declared
 a success. One good example does not prove anything just like one bad
 example does not prove anything.
Many good examples do prove a ton though. Just off the top of my head:

- complex numbers
- checked integers
- checked floating point numbers
- ranged/constrained numbers
- big int
- big float
- matrices and vectors
- dimensional analysis (SI units)
- rational numbers
- fixed-point numbers
One more:

- Variant types
 If I agree with something is that opCat is an oddity here as it
 doesn't usually group with others. Probably it would have helped if
 opCat would have been left named (just like opEquals or opCmp) but
 then uniformity has its advantages too. I don't think it's a disaster
 one way or another, but I do understand how opCat in particular is
 annoying to your case.
Probably the most common operator overload in D is opEquals; luckily that is not a template (even though it sadly does not work with interfaces yet).

It seems that operator overloads fall into categories. There are the numeric overloads, which I agree are generally overloaded in groups. When I defined cursors to be more like C++ iterators in dcollections instead of small ranges, I used the ++ and -- overloads, which you typically define together.

When designing the mixin that allows you to define various operator overloads, I think it would be hugely beneficial to take these groupings into account and make the mixins modular.
That's a valuable insight! Introspection can help a lot, e.g. you can synthesize opAdd from opAddAssign (or vice versa) etc.
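A sketch of that synthesis (the mixin name and the `CheckedInt` type are hypothetical, not from the thread): a reusable mixin template that derives `a op b` from `a op= b` wherever the compound assignment compiles.

```d
// Synthesize the binary operators from the compound-assignment ones.
mixin template BinaryFromOpAssign()
{
    typeof(this) opBinary(string op, T)(T rhs)
        if (__traits(compiles, (typeof(this) t, T r) {
                mixin("t " ~ op ~ "= r;");
            }))
    {
        auto result = this;               // copy the left operand...
        mixin("result " ~ op ~ "= rhs;"); // ...and reuse op= on the copy
        return result;
    }
}

struct CheckedInt
{
    int x;

    ref CheckedInt opOpAssign(string op)(CheckedInt rhs)
        if (op == "+" || op == "-")
    {
        mixin("x " ~ op ~ "= rhs.x;");
        return this;
    }

    mixin BinaryFromOpAssign;
}

void main()
{
    assert((CheckedInt(3) + CheckedInt(4)).x == 7);
    assert((CheckedInt(3) - CheckedInt(4)).x == -1);
}
```

The constraint introspects: opBinary only exists for the operators the type's opOpAssign actually accepts.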
 I haven't had that experience. This is just me talking. Maybe others
 believe it is good.

 I agree that the flexibility is good, I really think it should have that
 kind of flexibility. Especially when we start talking about the whole
 opAddAssign mess that was in D1. It also allows making wrapper types
 easier.

 The problem with flexibility is that it comes with complexity. Most
 programmers looking to understand how to overload operators in D are
 going to be daunted by having to use both templates and template
 constraints, and possibly mixins.
Most programmers looking to understand how to overload operators in D will need to bundle them (see the common case argument above) and will go with the TDPL examples, which are clear, short, simple, and useful.
The code itself is simple, it's the "how does x + y match up with this template thingy" which is the problem I think. We've already had several posts on d.learn ask how operator overloads work even after reading TDPL.
I'm not worried about this much at all, as I think (unintentionally) things have fallen in the right place: operator overloading is an advanced, specialized topic. I believe the set of users who are sophisticated enough to sit down and start overloading operators at large, yet at the same time are beginners enough not to grasp the notion of a generic function in D (which is much simpler than in other languages), may well be non-empty, but is small enough not to cater for.
 Saw that. I have a suspicion that we'll see a solid solution from you
 soon!
Alas, no solution is possible without templates being allowed in interfaces :( But yes, I plan to use this technique as soon as it's possible.
I voted for it now, too. As I always use all of my 10 votes, I had asked on the Phobos list to increase that limit, but was thoroughly shredded into little pieces. Andrei
Dec 30 2010
prev sibling next sibling parent reply Max Samukha <spambox d-coding.com> writes:
On 12/30/2010 07:08 PM, Steven Schveighoffer wrote:

 auto opAdd(Foo other)

 vs.

 auto opBinary(string op)(Foo other) if (op == "+")
For the latter not to look so intimidating, it can be shortened to:

auto opBinary(string op : "+")(Foo other)
Dec 30 2010
parent "Simen kjaeraas" <simen.kjaras gmail.com> writes:
On Thu, 30 Dec 2010 19:17:09 +0100, Max Samukha <spambox d-coding.com>  
wrote:

 On 12/30/2010 07:08 PM, Steven Schveighoffer wrote:

 auto opAdd(Foo other)

 vs.

 auto opBinary(string op)(Foo other) if (op == "+")
For the latter not to look so intimidating, it can be shortened to:

auto opBinary(string op : "+")(Foo other)
Sadly, that shortcut does not allow for grouping. :( Perhaps we could get a specialization for that, though:

auto opBinary( string op : "+" | "-" )( Foo other )

for instance.

-- 
Simen
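The `"+" | "-"` specialization Simen proposes does not exist, but the grouping is already expressible today, just more verbosely, through a template constraint (the `Foo` type is hypothetical):

```d
struct Foo
{
    int x;

    // one body serves both + and -, selected by the constraint
    auto opBinary(string op)(Foo other) if (op == "+" || op == "-")
    {
        mixin("return Foo(x " ~ op ~ " other.x);");
    }
}

void main()
{
    assert((Foo(5) + Foo(2)).x == 7);
    assert((Foo(5) - Foo(2)).x == 3);
}
```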
Dec 30 2010
prev sibling parent "Simen kjaeraas" <simen.kjaras gmail.com> writes:
Steven Schveighoffer <schveiguy yahoo.com> wrote:

 Where the new scheme wins in brevity (for written code at least, and  
 certainly not simpler to understand) is cases where:

 1. inheritance is not used
 2. you can consolidate many overloads into one function.
As others have pointed out, this likely accounts for 99% of uses. The one exception is opCat. -- Simen
Dec 30 2010
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter:

 Ada is a failed language.
I agree that as a general purpose language Ada is probably a failed language. But Ada is still often the best language if you have to write the autopilot software for an aeroplane or something else that requires software with minimal bug counts. Even if Ada is globally a failed language, it's a large language composed of many features, so some of its features may still be good.

D and Ada share some purposes and semantics; they are similar languages. They are both large system languages fit for very high performance systems, they are both improvements over older simpler languages (C-family and Pascal-family), both have generics, and so on (there are some differences too: where D tries to add very flexible and general features, Ada often adds many different specialized features that are safer. Ada doesn't value code compactness much). The most important shared design goal of Ada and D is that both regard program correctness as very important. In this regard surely Ada tries harder than D. I like Python for certain kinds of programs, but if I go on a ship that uses an autopilot I'd like it to be written in a language safer than C. D advertises itself as a language that cares a lot about code correctness, but I am sure more is doable in this regard.

Even if today Ada is sometimes the best language to write an autopilot, there are alternatives on the horizon: two Microsoft researchers have released "Verve", a little experimental operating system kernel that has a nucleus written in typed assembly. Maybe in the future high integrity software systems will be written like this instead of Ada (typed assembly is nicer than the normal inline D asm even when it's not formally verified, just verified by the type system, more or less like C code).

Bye,
bearophile
Dec 29 2010
parent reply Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 Walter:
 
 Ada is a failed language.
I agree that as general purpose language Ada is probably a failed language. But currently Ada is often the best language still if you have to write the autopilot software for an aeroplane or something that requires software with minimal bug counts.
I don't believe that has been objectively demonstrated. It's a claim.
 Even if today Ada is sometimes the best language to write an autopilot,
 there are alternatives on the horizon: two Microsoft researchers have
 released "Verve", a little experimental operating system kernel that has
 a nucleus written in typed assembly. Maybe in the future
 high integrity software systems will be written like this instead of Ada
 (typed assembly is nicer than the normal inline D asm even when it's not
 formally verified, just verified by the type system, more or less like C
 code).
I was able to show within minutes that its contract proving feature is so limited as to be effectively useless. I'm not convinced by its security features, either.

In other words, I think you should be careful reading feature lists and lists of claims put out by marketing departments. Whether a language feature delivers on its promises is only borne out by years of experience in the field writing real software with real programmers. The only way you're going to actually find out what is causing problems in the field is to talk a lot with experienced programmers and their managers, and to do things like reading the bug lists on major projects and trying to figure out why those problems happened.

I do have some experience with this, having worked at Boeing on flight critical designs for airliners. There are a lot of lessons there applicable to software development, and one lesson is that armchair gedanken experiments are no substitute for field experience in how mechanics actually put things together, what kinds of mistakes they are prone to making, etc.
Dec 29 2010
parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter:

 I don't believe that has been objectively demonstrated. It's a claim.
Here, at page 6 (Table 1), you can see bug frequencies for various languages (C, Ada, SPARK, etc.): http://www.cs.virginia.edu/~jck/cs686/papers/andy.german.pdf (This is not a demonstration.)

You and Andrei write D marketing all the time :-)
 I was able to show within minutes that its contract proving feature is so
 limited as to be effectively useless.
You have found that if you use a specific solver (I think S3, but other solvers are available) it fails to prove some specific code. Elsewhere I have read that bitwise operations are harder for such solvers.

A design requirement of the Verve Kernel is to have some of its most important parts demonstrated as correct. So where the automatic solver is not able to do its work, you have to demonstrate the code manually (or with a semi-automatic solver). So the better the automatic solver is, the less work you have to do. But even if it's able to demonstrate only half of the code, you have saved a lot of manual work. So it's not useless. You can't find an automated theorem prover able to demonstrate code in all cases.
 I do have some experience with this, having worked at Boeing on flight critical
 designs for airliners. There are a lot of lessons there applicable to software
 development, and one lesson is that armchair gedanken experiments are no
 substitute for field experience in how mechanics actually put things together,
 what kinds of mistakes they are prone to making, etc.
But informatics is not just engineering, it's a branch of mathematics too. It's important to study new nice armchair gedanken experiments, think about them, and develop them. I have some experience with science and computer science, and I can tell those Microsoft researchers are doing some quite interesting computer science.

Please don't make the digitalmars newsgroups too academic-unfriendly :-) D2 is too young to compensate for the loss of such people from its community.

Bye,
bearophile
Dec 30 2010
parent Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 Walter:
 
 I don't believe that has been objectively demonstrated. It's a claim.
Here at page 6 (Table 1) you see some bug frequencies for various languages, C, Ada, SPARK, etc: http://www.cs.virginia.edu/~jck/cs686/papers/andy.german.pdf (This is not a demonstration).
Thank you. I think that looks like solid information, what I was asking for. I'd be curious how D stacks up under such analysis.

You and Andrei write D marketing all the time :-)
Yes, we do.
 I was able to show within minutes that its contract proving feature is so 
 limited as to be effectively useless.
 You have found that if you use a specific solver (I think S3, but other
 solvers are available) it fails to prove some specific code. Elsewhere I
 have read that bitwise operations are harder for such solvers.
I haven't seen a contract proving system that is remotely ready for production use. It simply does not deliver on its promises.
 A design requirement of the Verve Kernel is to have some of its most
 important parts demonstrated as correct. So where the automatic solver is not
 able to do its work, you have to demonstrate the code manually (or with a
 semi-automatic solver). So the better the automatic solver is, the less work
 you have to do. But even if it's able to demonstrate only half of the code,
 you have saved lot of manual work. So it's not useless. You can't find an
 automated theorem prover able to demonstrate code in all cases.
It's another research project.
 I do have some experience with this, having worked at Boeing on flight
 critical designs for airliners. There are a lot of lessons there applicable
 to software development, and one lesson is that armchair gedanken
 experiments are no substitute for field experience in how mechanics
 actually put things together, what kinds of mistakes they are prone to
 making, etc.
But informatics is not just engineering, it's a branch of mathematics too. It's important to study new nice armchair gedanken experiments, think about them, and develop them. I have some experience with science and computer science, and I can tell those Microsoft researchers are doing some quite interesting computer science. Please don't make the digitalmars newsgroups too academic-unfriendly :-) D2 is too young to compensate for the loss of such people from its community.
D isn't a research project. It is much like an airliner - you don't use an airliner as a research project. It's a workhorse. You use it to move passengers with safety and efficiency, using the best available proven technology.

A lot of what D does has been proven effective in other languages, but I admit that the mix of features as expressed in D can have unexpected consequences, and only time & experience will tell if we got the recipe right or not.
Dec 30 2010