
digitalmars.D.learn - Bug in D!!!

reply EntangledQuanta <EQ universe.com> writes:
This is quite surprising!

public struct S(T)
{
	T s;
}


interface I
{	
	void Go(T)(S!T s);

	static final I New()
	{
		return new C();
	}
}

abstract class A : I
{
	
}


class C : A
{
	void Go(T)(S!T s)
	{
		
	}
}


void main()
{
	S!int s;
	auto c = I.New();
	
	c.Go(s);    // fails!
	//(cast(C)c).Go(s);  // Works; the only difference is we have made c an explicit C.
	
}

https://dpaste.dzfl.pl/dbc5a0663802

Everything works when Go is not templatized (we explicitly make T an int).


This is a blocker for me! Can someone open a ticket?
Aug 30
next sibling parent reply Kagamin <spam here.lot> writes:
It can't work this way. You can try std.variant.
Aug 30
parent EntangledQuanta <EQ universe.com> writes:
On Wednesday, 30 August 2017 at 21:13:19 UTC, Kagamin wrote:
 It can't work this way. You can try std.variant.
Sure it can! What are you talking about? std.variant has nothing to do with it! It works if T is hard coded, so it should work generically. What's the point of template variables if they can't be used across inheritance? I could overload Go for each type and then it would work, so there is absolutely no reason why this can't work. Replace T with short and it works; replace T with anything and it works; hence it should work with T.

If you are claiming that the compiler has to make a virtual function for each T, that is nonsense. I only need it for primitives, and there are a finite number of them. I could create overloads for short, int, double, float, etc., but why? The whole point of templates is to solve that problem. Variants do not help. Openmethods can solve this problem too, but D should be more intelligent than simply writing off all normal use cases because someone thinks something can't be done.

How many people thought it was impossible to go to the moon, yet it happened. Anyone can deny anything; it's such a simple thing to do...
Aug 30
prev sibling next sibling parent reply Jonathan M Davis via Digitalmars-d-learn writes:
On Wednesday, August 30, 2017 20:47:12 EntangledQuanta via Digitalmars-d-learn wrote:
 This is quite surprising!

 [...]

 This is a blocker for me! Can someone open a ticket?
It is not possible for a function to be both virtual and templated. A function template generates a new function definition every time that it's called with a new set of template arguments. So, the actual functions are not known up front, and that fundamentally does not work with virtual functions, where the functions need to be known up front, and you get a different function by a look-up in the virtual function call table for the class.

Templates and virtual functions simply don't mix. You're going to have to come up with a solution that does not try to mix templates and virtual functions.

- Jonathan M Davis
Aug 30
parent reply EntangledQuanta <EQ universe.com> writes:
On Wednesday, 30 August 2017 at 21:33:30 UTC, Jonathan M Davis wrote:
 On Wednesday, August 30, 2017 20:47:12 EntangledQuanta via Digitalmars-d-learn wrote:
 This is quite surprising!
 [...]
 This is a blocker for me! Can someone open a ticket?
 It is not possible to have a function be both virtual and templated. [...] Templates and virtual functions simply don't mix. You're going to have to come up with a solution that does not try to mix templates and virtual functions. - Jonathan M Davis
I have a finite number of possible values of T, let's say 3. They are known at compile time; just because you or D thinks they are not simply means you or D is not trying hard enough. So, saying that virtual methods and templates are not compatible is wrong. If I can overload a virtual function to cover all my use cases, and that is all I need, then I **should** be able to do it with templates. Simple as that; if D can't do that, then D needs to be enhanced to do so. e.g.,

class C
{
	Go(Primitive!T)(T t);
}

The compiler can realize that T can only be a primitive and generate all possible combinations of primitives, which is finite. This is doable; it is not impossible, regardless of what you think. It is equivalent to

class C
{
	Go(Primitive1 t);
	Go(Primitive2 t);
	...
	Go(PrimitiveN t);
}

In fact, we can use string mixins to generate such code, but it doesn't save us trouble, which is what templates are supposed to do in the first place. Just because someone hasn't implemented special cases does not mean it is theoretically impossible to do. A different syntax would be better:

interface I
{
	Go(T in [float, double, int])(T t);
}

class C : I
{
	Go(T in [float, double, int])(T t)
	{
	}
}

which the compiler "unrolls" to

interface I
{
	Go(float t);
	Go(double t);
	Go(int t);
}

class C
{
	Go(float t) { }
	Go(double t) { }
	Go(int t) { }
}

which is standard D code. There is nothing wrong with specializing the most common cases.

The point you are trying to make, and not doing a great job of, is that the compiler cannot create an unknown set of virtual functions from a single templated virtual function. BUT, when you realize that is what the problem is, the unknown set is the issue, NOT templated virtual functions. Make the set known and finite somehow and you have a solution, and it's not that difficult. Just requires some elbow grease.

Primitives are obviously known at compile time, so that is a doable special case. Although there will probably be quite a bit of wasted space, since each primitive will have a function generated for it for each templated function, that really isn't an issue. By adding a new syntax to D, we could allow any arbitrary (but known and finite) set to be used:

Go(T in [A,B,C])(T t)

where A, B, C are types known at compile time. This generates 3 functions and is doable. (Should be simple for any D compiler genius to add for testing.)
Aug 30
parent reply Jonathan M Davis via Digitalmars-d-learn writes:
On Wednesday, August 30, 2017 21:51:57 EntangledQuanta via Digitalmars-d-learn wrote:
 The point you are trying to make, and not doing a great job of, is
 that the compiler cannot create an unknown set of virtual
 functions from a single templated virtual function. BUT, when you
 realize that is what the problem is, the unknown set is the issue,
 NOT templated virtual functions. Make the set known and finite
 somehow and you have a solution, and it's not that difficult.
 Just requires some elbow grease.
Templates have no idea what arguments you intend to use with them. You can pass them any arguments you want, and as long as they pass the template constraint, the compiler will attempt to instantiate the template with those arguments - which may or may not compile, but the compiler doesn't care about that until you attempt to instantiate the template. The language does not support a mechanism for creating a templated function where you define ahead of time what all of the legal arguments are, such that the compiler will just instantiate them all for you. The compiler only instantiates templates when the code instantiates them.

Feel free to open up an enhancement request for some sort of template which has a specified list of arguments to be instantiated with, which the compiler will then instantiate up front and allow no others, but that is not currently a language feature. The normal solution for something like that right now would be to explicitly declare each function that you want and then have them call a templated function in order to share the implementation. e.g.

class C
{
public:

    auto foo(int i) { return _foo(i); }
    auto foo(float f) { return _foo(f); }
    auto foo(string s) { return _foo(s); }

private:

    auto _foo(T)(T t) { ... }
}

- Jonathan M Davis
Aug 30
next sibling parent EntangledQuanta <EQ universe.com> writes:
On Wednesday, 30 August 2017 at 22:08:03 UTC, Jonathan M Davis 
wrote:
 On Wednesday, August 30, 2017 21:51:57 EntangledQuanta via Digitalmars-d-learn wrote:
 The point you are trying to make, and not doing a great job of,
 is that the compiler cannot create an unknown set of virtual
 functions from a single templated virtual function. BUT, when
 you realize that is what the problem is, the unknown set is
 the issue, NOT templated virtual functions. Make the set known
 and finite somehow and you have a solution, and it's not that
 difficult. Just requires some elbow grease.
Templates have no idea what arguments you intend to use with them. [...] Feel free to open up an enhancement request for some sort of template which has a specified list of arguments to be instantiated with, which the compiler will then instantiate up front and allow no others, but that is not currently a language feature.
And my point is that it is not always the case that T can be anything. What if T is meant to only be algebraic?

auto foo(T : Algebraic!(int, float, double))(T t)
{
}

Will the compiler be smart enough to deduce that there are only 3 possibilities? No, but it should. (But of course, we don't want to use Algebraic, because that makes things messy, and the whole point of all this is to reduce the mess.)

As far as a feature request goes, my guess is no one will care. I'd hope that wouldn't be the case, but seeing how much excitement solving this problem has generated leads me to believe no one really cares about solving it.
 The normal solution for something like that right now would be 
 to explicitly declare each function that you want and then have 
 them call a templated function in order to share the 
 implementation. e.g.

 class C
 {
 public:

     auto foo(int i) { return _foo(i); }
     auto foo(float f) { return _foo(f); }
     auto foo(string s) { return _foo(s); }

 private:

     auto _foo(T)(T t) { ... }
 }

 - Jonathan M Davis
Yes, but this is really just explicit overloading. It doesn't solve the problem that templates are supposed to solve. When one starts overloading things, it becomes a bigger mess, as each class needs to deal with the overloading and dispatching.

It all could be solved with a bit of compiler "magic" (which should be quite simple). I mean, the compiler optimizes all kinds of things; this case shouldn't be any different. If it can determine that a template parameter is reasonably finite, then it should convert the templated method into a series of overloaded methods for us... which is what you essentially did.
Aug 30
prev sibling parent EntangledQuanta <EQ universe.com> writes:
On Wednesday, 30 August 2017 at 22:08:03 UTC, Jonathan M Davis 
wrote:
 On Wednesday, August 30, 2017 21:51:57 EntangledQuanta via Digitalmars-d-learn wrote:
 [...]
Templates have no idea what arguments you intend to use with them. You can pass them any arguments you want, and as long as they pass the template constraint, the compiler will attempt to instantiate the template with those arguments - which may or may not compile, but the compiler doesn't care about that until you attempt to instantiate the template. [...]
I'm going to try to implement it as a library solution, something that basically does what you have done. This will at least simplify each instance to a few lines of code, but it would be required in all derived classes.
Aug 30
prev sibling next sibling parent lobo <swamplobo gmail.com> writes:
On Wednesday, 30 August 2017 at 20:47:12 UTC, EntangledQuanta 
wrote:
 This is quite surprising!

 [...]

 This is a blocker for me! Can someone open a ticket?
Knock yourself out: https://issues.dlang.org/

Anyone can open tickets for bugs or enhancement requests.

bye,
lobo
Aug 30
prev sibling next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Wednesday, 30 August 2017 at 20:47:12 UTC, EntangledQuanta 
wrote:
 This is quite surprising!
In the new version pending release (scheduled for later this week), we get a new feature `static foreach` that will let you loop through the types you want and declare all the functions that way. When it is released, we'll have to take a second look at this problem.
Aug 30
parent reply EntangledQuanta <EQ universe.com> writes:
On Wednesday, 30 August 2017 at 22:52:41 UTC, Adam D. Ruppe wrote:
 On Wednesday, 30 August 2017 at 20:47:12 UTC, EntangledQuanta 
 wrote:
 This is quite surprising!
In the new version pending release (scheduled for later this week), we get a new feature `static foreach` that will let you loop through the types you want and declare all the functions that way. When it is released, we'll have to take a second look at this problem.
I've already implemented a half ass library solution. It works, but is not robust. The compiler can and should do this!

string OverLoadTemplateDefinition(string name, alias func, T...)()
{
	import std.string;
	string str;
	foreach(t; T)
		str ~= ((func!t).stringof).replace("function(", name~"(")~";\n";
	return str;
}

string OverLoadTemplateMethod(string name, alias func, T...)()
{
	import std.traits, std.algorithm, std.meta, std.string;
	alias RT(S) = ReturnType!(func!S);
	alias PN(S) = ParameterIdentifierTuple!(func!S);
	alias P(S) = Parameters!(func!S);
	alias PD(S) = ParameterDefaults!(func!S);
	string str;
	foreach(t; T)
	{
		str ~= (RT!t).stringof~" "~name~"(";
		foreach(k, p; P!t)
		{
			auto d = "";
			static if (PD!t[k].stringof != "void")
				d = " = "~(PD!t)[k].stringof;
			str ~= p.stringof~" "~(PN!t)[k]~d;
			if (k < (P!t).length - 1) str ~= ", ";
		}
		str ~= ") { _"~name~"(";
		foreach(k, n; PN!t)
		{
			str ~= n;
			if (k < (P!t).length - 1) str ~= ", ";
		}
		str ~= "); }\n";
	}
	return str;
}

They are basically the generic version of what Jonathan implemented by hand. In the interface:

private alias _Go(T) = void function();
mixin(OverLoadTemplateDefinition!("Go", _Go, int, short, float, double)());

In the class:

mixin(OverLoadTemplateMethod!("Go", _Go, int, short, float, double)());
protected final void _Go(T)() { ... }

The alias simply defines the function that we are creating. The mixin OverLoadTemplateDefinition creates the N declarations. In the class, we have to do something similar, but dispatch them to the protected _Go... very similar to what Jonathan did by hand.

But the code to do so is not robust and will break in many cases, because I left a lot of details out (linkage, attributes, etc). It is a proof of concept, and as you can see, it is not difficult. The compiler, and anyone that has a decent understanding of its internals, should be able to implement something quite easily. Maybe it is also possible to use opCall to do something similar?
I'd like to reiterate that this is not an unsolvable problem or an NP-hard problem. It is quite easy. If we restrict the types to a computable set, it is just simple overloading, with templates to reduce the complexity. Having the compiler do this can reduce the noise, increase the robustness, and also provide a nice feature that it currently does not have, but should.

Using templates with inheritance is a good thing. It should be allowed instead of blindly preventing all cases when only one case is uncomputable. The logic that some are using is akin to "We can't divide by 0, so let's not divide at all", but of course, division is very useful, and one pathological case doesn't prevent all other cases from being useful.
Aug 30
next sibling parent =?UTF-8?Q?Ali_=c3=87ehreli?= <acehreli yahoo.com> writes:
On 08/30/2017 05:49 PM, EntangledQuanta wrote:

 The compiler can and should do this!
Yes, the compiler can do it for each compilation, but there is also the feature called /separate compilation/ that D supports. With separate compilation, there would potentially be multiple different and incompatible definitions of the same interface, because no single compilation can know the whole set of instantiations of a template.

This relatively popular request is impossible in D, as well as in other languages, like C++, that practically settled on vtbl-based polymorphism.

Ali
Aug 30
prev sibling parent reply Kagamin <spam here.lot> writes:
On Thursday, 31 August 2017 at 00:49:22 UTC, EntangledQuanta 
wrote:
 I've already implemented a half ass library solution.
It can be improved a lot.
Aug 31
parent reply EntangledQuanta <EQ Universe.com> writes:
On Thursday, 31 August 2017 at 10:34:14 UTC, Kagamin wrote:
 On Thursday, 31 August 2017 at 00:49:22 UTC, EntangledQuanta 
 wrote:
 I've already implemented a half ass library solution.
It can be improved a lot.
Then, by all means, genius!
Aug 31
parent Biotronic <simen.kjaras gmail.com> writes:
On Thursday, 31 August 2017 at 15:48:12 UTC, EntangledQuanta 
wrote:
 On Thursday, 31 August 2017 at 10:34:14 UTC, Kagamin wrote:
 On Thursday, 31 August 2017 at 00:49:22 UTC, EntangledQuanta 
 wrote:
 I've already implemented a half ass library solution.
It can be improved a lot.
Then, by all means, genius!
Enjoy!

mixin template virtualTemplates(alias parent, alias fn, T...)
{
    import std.meta;
    alias name = Alias!(__traits(identifier, fn)[1..$]);
    mixin virtualTemplates!(parent, name, fn, T);
}

mixin template virtualTemplates(alias parent, string name, alias fn, T...)
{
    import std.traits;
    static if (is(parent == interface))
    {
        template templateOverloads(string name : name)
        {
            alias templateOverloads = T;
        }
        alias Types = T;
    }
    else
    {
        alias Types = templateOverloads!name;
    }
    mixin(virtualTemplatesImpl(name, Types.length, is(parent == class)));
}

string virtualTemplatesImpl(string name, int n, bool implement)
{
    import std.format;
    string result;
    foreach (i; 0..n)
    {
        auto body = implement
            ? format(" { return fn!(Types[%s])(args); }", i)
            : ";";
        result ~= format("ReturnType!(fn!(Types[%s])) %s(Parameters!(fn!(Types[%s])) args)%s\n",
            i, name, i, body);
    }
    return result;
}

interface I
{
    void _Go(T)(T s);
    void _Leave(T)(T s);

    mixin virtualTemplates!(I, _Go, int, short, float, double);
    mixin virtualTemplates!(I, "Abscond", _Leave, int, short, float, double);
}

class C : I
{
    void _Go(T)(T s) { }
    void _Leave(T)(T s) { }

    mixin virtualTemplates!(C, _Go);
    mixin virtualTemplates!(C, "Abscond", _Leave);
}

unittest
{
    I c = new C();
    c.Go(3.2);
    c.Abscond(3.4f);
}

Does not support multiple template parameters or template value parameters. Use at your own risk for any and all purposes.
Sep 01
prev sibling next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
static foreach is now in the new release! You can now do stuff like:

---
alias I(A...) = A;

interface Foo {
         static foreach(T; I!(int, float))
                 void set(T t); // define virt funcs for a list of types
}

class Ass : Foo {
         static foreach(T; I!(int, float))
                 void set(T t) {
                         // simplement
                 }
}
---


really easily.
Sep 01
parent reply EntangledQuanta <EQ universe.com> writes:
On Friday, 1 September 2017 at 15:24:39 UTC, Adam D. Ruppe wrote:
 static foreach is now in the new release! You can now do stuff like:

 ---
 alias I(A...) = A;

 interface Foo {
         static foreach(T; I!(int, float))
                 void set(T t); // define virt funcs for a list of types
 }

 class Ass : Foo {
         static foreach(T; I!(int, float))
                 void set(T t) {
                         // simplement
                 }
 }
 ---


 really easily.
I get an access violation. Changed the code to

import std.meta;
static foreach(T; AliasSeq!("int", "float"))
	mixin("void set("~T~" t);");

and I also get an access violation ;/
Sep 01
parent reply Adam D Ruppe <destructionator gmail.com> writes:
On Friday, 1 September 2017 at 18:17:22 UTC, EntangledQuanta 
wrote:
 I get an access violation, changed the code to
What is the rest of your code? access violation usually means you didn't new the class...
Sep 01
parent reply EntangledQuanta <EQ universe.com> writes:
On Friday, 1 September 2017 at 19:25:53 UTC, Adam D Ruppe wrote:
 On Friday, 1 September 2017 at 18:17:22 UTC, EntangledQuanta 
 wrote:
 I get an access violation, changed the code to
What is the rest of your code? access violation usually means you didn't new the class...
No, that is the code! I added nothing. Try it out and you'll see. I just upgraded to the released dmd too.

alias I(A...) = A;

interface Foo {
	static foreach(T; I!(int, float))
		void set(T t); // define virt funcs for a list of types
}

class Ass : Foo {
	static foreach(T; I!(int, float))
		void set(T t) {
			// simplement
		}
}

void main()
{
}

Try it.
Sep 01
parent EntangledQuanta <EQ universe.com> writes:
This happens when building, not running. This might be a Visual D issue, as when I use dmd from the command line, it works fine ;/
Sep 01
prev sibling next sibling parent reply Jesse Phillips <Jesse.K.Phillips+D gmail.com> writes:
I'd love being able to inherit and override generic functions in C#. Unfortunately C# doesn't use templates, and I hit so many other issues where Generics just suck.

I don't think it is appropriate to dismiss the need for the 
compiler to generate a virtual function for every instantiated T, 
after all, the compiler can't know you have a finite known set of 
T unless you tell it.

But let's assume we've told the compiler that it is compiling all the source code and it does not need to compile for future linking.

First the compiler will need to make sure all virtual functions 
can be generated for the derived classes. In this case the 
compiler must note the template function and validate all derived 
classes include it. That was easy.

Next up, each instantiation of the function needs a new v-table entry in all derived classes. The current compiler implementation compiles each module independently of the others, so this feature could either be specified to work only within the same module, or new semantics could be written up for how the compiler modifies already compiled modules and those which reference them (the object sizes would be changing due to the v-table modifications).

With those three simple changes to the language I think that this 
feature will work for every T.
Sep 01
parent reply EntangledQuanta <EQ universe.com> writes:
On Friday, 1 September 2017 at 23:25:04 UTC, Jesse Phillips wrote:
 [...]

 But let's assume we've told the compiler that it is compiling
 all the source code and it does not need to compile for future
 linking. [...]

 With those three simple changes to the language I think that
 this feature will work for every T.
Specifying that there will be no further linkage is the same as making T finite. T must be finite. C# uses generics/IR/CLR, so it can do things at run time that are effectively compile time for D.

By simply extending the grammar slightly in an intuitive way, we can get the explicit finite case, which is easy:

foo(T in [A,B,C])()

and possibly, for your case,

foo(T in <module>)()

would work, or

foo(T in <program>)()

The `in` keyword makes sense here and is neither used nor ambiguous in this position, I believe.

Regardless of the implementation, the idea that we should throw the baby out with the bathwater is simply wrong. At least there are a few who get that. By looking into it in a serious manner, an even better solution might be found. Not looking at all results in no solutions and no progress.
Sep 01
next sibling parent reply Jesse Phillips <Jesse.K.Phillips+D gmail.com> writes:
On Saturday, 2 September 2017 at 00:00:43 UTC, EntangledQuanta 
wrote:
 Regardless of the implementation, the idea that we should throw 
 the baby out with the bathwater is simply wrong. At least there 
 are a few who get that. By looking into it in a serious manner 
 an even better solution might be found. Not looking at all 
 results in no solutions and no progress.
Problem is that you didn't define the problem. You showed some code the compiler rejected and expressed that the compiler needed to figure it out. You did change it to having the compiler instantiate specified types, but that isn't defining the problem.

You didn't like the code needed which would generate the functions and you hit a Visual D with the new static foreach. All of these are problems you could define, and you could have evaluated static foreach as a solution, but instead you stopped at problems with the tooling.

You also don't appear to care about the complexity of the language. I expressed three required changes, some of which may not play nicely with least surprise. You went straight to "we just need to define a syntax for that" instead of expressing concern that the compiler will also need to handle errors to the user, such that the user understands that a feature they use is limited to very specific situations.

Consider if you have a module-defined interface: is that interface only available for use in that module? If not, how does a different module inherit the interface; does it need a different syntax?

There is a lot more to a feature than having a way to express your desires. If you're going to stick to a stance that it must exist, and aren't going to accept there are problems with the request, why expect others to work through the request?
Sep 02
parent EntangledQuanta <EQ universe.com> writes:
On Saturday, 2 September 2017 at 16:20:10 UTC, Jesse Phillips 
wrote:
 On Saturday, 2 September 2017 at 00:00:43 UTC, EntangledQuanta 
 wrote:
 Regardless of the implementation, the idea that we should 
 throw the baby out with the bathwater is simply wrong. At 
 least there are a few who get that. By looking into it in a 
 serious manner an even better solution might be found. Not 
 looking at all results in no solutions and no progress.
Problem is that you didn't define the problem. You showed some code the compiler rejected and expressed that the compiler needed to figure it out. You did change it to having the compiler instantiate specified types, but that isn't defining the problem.
I think the problem is clearly defined; it's not my job to be a D compiler researcher and spell everything out for everyone else. Do I get paid for solving D's problems?
 You didn't like the code needed which would generate the 
 functions and you hit a Visual D with the new static foreach.
This sentence makes no sense. "Hit a Visual D" what? Do you mean bug? If that is the case, how is that my fault? Am I supposed to know off the bat that an access violation is caused by Visual D and not dmd when there is no info about the violation? Is it my fault that someone didn't code one of those tools well enough to express enough information for one to figure it out immediately?
 All of these are problems you could define, and you could have 
 evaluated static foreach as a solution but instead stopped at 
 problems with the tooling.
Huh? I think you fail to understand the real problem. The problem has nothing to do with tooling, and I never said it did. The static foreach "solution" came after the fact, when SEVERAL people (ok, 2) said it was an impossible task. That is where all this mess started. I then came up with a solution which proved that it is possible to do on some level; that is a solution to a problem that was defined, else the solution wouldn't exist.
 You also don't appear to care about the complexity of the 
 language. I expressed three required changes some of which may 
 not play nicely with least surprise. You went straight to, we 
 just need to define a syntax for that instead of expressing 
 concern that the compiler will also need to handle errors to 
 the user, such that the user understands that a feature they use 
 is limited to very specific situations.
Do you not understand that if a library solution exists, then there is no real complexity added? It is called "lowering" by some. The compiler simply "rewrites" whatever new syntax is added into a form that the library solution realized. You are pretending, why?, that what I am proposing will somehow potentially affect every square micron of the D language and compiler, when it won't. Not all additions to a compiler add *real* complexity. That is a failing of you and many on the D forums who resist change.
 Consider if you have a module defined interface, is that 
 interface only available for use in that module? If not, how 
 does a different module inherit the interface, does it need a 
 different syntax.
What does that have to do with this problem? We are not talking about interfaces; we are talking about something inside interfaces, so the problem about interfaces is irrelevant to this discussion, because it applies to interfaces in general... interfaces that already exist, and the problem exists regardless of what I
 There is a lot more to a feature than having a way to express 
 your desires. If you're going to stick to a stance that it must 
 exist and aren't going to accept there are problems with the 
 request why expect others to work through the request.
No, your problem is your ego and your inability to interpret things outside of your own mental box. You should always keep in mind that you are interpreting someone else's mental wordage in your own way, and it is not a perfect translation; in fact, we are lucky if 50% is interpreted properly. Now, if I do not have a right to express my desires, then at least state that, but I do have a right not to express any more than that. As far as motivating other people, that isn't my job. I could care less, actually. D is a hobby for me and I do it because I like the power D has, but D is the most frustrating language I have ever used. It's the most (hyperbole) buggy, most incomplete (good docs system: regardless of what the biased want to claim, tooling, etc.), most uninformative (errors that just toss the whole kitchen sink at you), etc. But I do have hope... which is the only reason I use it. Maybe I'm just an idiot and should go with the crowd; it would at least save me some frustration.

C#, since you are familiar with it: you should know there is a huge difference. If D were like C# as far as the organizational structure (I do not mean MS, I mean the docs, library, etc.), you would surely agree that D would most likely be the #1 language on this planet? C# has its shit together. It is, for the most part, an elegant language that is well put together in almost every regard. It was thought out well and not hacked together the way D feels. The problem is that the D community doesn't seem to want to go in a similar direction, but goes in circles. I think D will not progress much further in the next 10 years, if at all, as far as improving itself. The attitude of D programmers tends to be quite lame (it's a ragtag collection of individuals working by disparate means that only come together when there is a common need, rather than a team working together for a higher, focused purpose).

First, you make stuff up, as I never said anything about it *must exist* in D.
Search the thread and you will see that you are the first one to use that phrase. Second, you fail to understand the difference between a theoretical discussion about what is possible and the practical question of what is possible. Third, I am talking about the theoretical aspects of the *ability* to use virtual template functions in D. I was told it is impossible, at least at first. Jonathan then came up with a hand-written method where one uses a kludge to sort of do it. I then came up with a library solution that shows that it can be implemented and used with a few lines of code that enable such a feature (the two mixins). I also clarified the problem by stating it is not an issue about virtual templated functions but about the "size" of T (by which I do not mean the byte size but the space). Such a solution shows that a compiler can internally "add those lines" (effectively, meaning it will do whatever similar work it needs to do, so we can get similar behavior without having to explicitly use the library solution to provide such functionality).

Fourth, knowing that it is feasible opens the door and at least should pacify those that claim it is impossible. That is actually quite a lot on my part. I could have just shut up and let things be what they are and let the ignorance continue being ignorant. I put the foot in the door. But by doing that it opens up things for discussion about progression, which is what happened next PRECISELY because I pushed through the ignorance and put in the time to get the discussion going. Sure, I could have silently written up a DIP and put 4 weeks of effort into it, forked dmd and implemented the code to show how it could be done, etc. But that is not my job, and considering how appreciative people around here are of compiler changes, DIPs, and advanced features, I'd expect it to be a total waste of time. I have better things to do with my life than that.
Given also the nature of the dmd community and the level of the tooling, docs, and such, I'm not going to invest my life in it beyond planting seeds that maybe one day will sprout (though likely they won't, because no one cares to water them).

So, now we are at the "static foreach" solution that Adam added. I tried it; it looks nice but crashed when I did it. You seem to think I'm supposed to realize immediately that the only error, "Access violation: Object(0x34234)", is supposed to be a tooling problem. I guess I'm just not that smart. But eventually I did figure it out on my own and realized it was with Visual D. But even that should be irrelevant, as we are talking about a D feature.

You then come along and add your 2c, and suggest a few specific issues and your thoughts about them. I then respond, essentially agreeing with you but stating I still don't think it can be done for every T, and offer a few syntaxes that might work in limiting T to being finite, which is fundamentally the problem, regardless of whether you think it is or not; and your statements are contradictory where you say "limit the linkage" and "all T". To spell it out: First you say "But lets assume we've told the compiler that it is compiling all the source code and it does not need to compile for future linking." Then at the end you say "With those three simple changes to the language I think that this feature will work for every T." Which are contradictory. Assuming we've told the compiler that no future linking is going to occur IS limiting T, which means it won't work for EVERY T. By every T I mean every T in existence ever, regardless of any assumptions, rules, etc. If you meant every T in the source code, then yes, but you should have made that explicit, since the problem innately depends on T being finite regardless of any implementation.

You then basically attack me, saying I should have done this and that and it's my fault for not stating the problem (which I did, clearly or not, or we wouldn't be at this point).
I should think about the ramifications, etc. But I guess every day is different, right?

Anyways, any library solution or kludge is not a solution in my book. The foreach method is no different than the mixin solution as far as adding additional lines of code to a project that make the code less clear, less elegant, and less robust. You can make claims all day long that everything that can be implemented in a library should be. If that is the case, many compiler features/all should be eliminated; in fact, maybe we should write in binary, as we can add everything to a library that we need? For some reason the DMD compiler and D language are treated like a golden calf that can't be changed. So much worry about adding complexity. If the design is so fragile that additional complexity or changes will potentially cause it to collapse, then it's not the feature's problem but D/dmd's. At least state that case if it is so. In that case it will collapse on its own in due time regardless of what new stuff is added; patches can only take one so far.

Anyways, I'm done with this conversation. I've shone light on a problem with D and shown that it has the potential to be solved; I am not going to be the one to solve it. If you want to spend many hours of your life trying to find a proper solution and get it accepted, by all means. I will use kludges, as they get me down the road... it doesn't make me happy, but who cares about happiness?
Sep 02
prev sibling parent reply Moritz Maxeiner <moritz ucworks.org> writes:
On Saturday, 2 September 2017 at 00:00:43 UTC, EntangledQuanta 
wrote:
 On Friday, 1 September 2017 at 23:25:04 UTC, Jesse Phillips 
 wrote:
 I've love being able to inherit and override generic functions 
 in C#. Unfortunately C# doesn't use templates and I hit so 
 many other issues where Generics just suck.

 I don't think it is appropriate to dismiss the need for the 
 compiler to generate a virtual function for every instantiated 
 T, after all, the compiler can't know you have a finite known 
 set of T unless you tell it.

 But lets assume we've told the compiler that it is compiling 
 all the source code and it does not need to compile for future 
 linking.

 First the compiler will need to make sure all virtual 
 functions can be generated for the derived classes. In this 
 case the compiler must note the template function and validate 
 all derived classes include it. That was easy.

 Next up each instantiation of the function needs a new v-table 
 entry in all derived classes. Current compiler implementation 
 will compile each module independently of each other; so this 
 feature could be specified to work within the same module or 
 new semantics can be written up of how the compiler modifies 
 already compiled modules and those which reference the 
 compiled modules (the object sizes would be changing due to 
 the v-table modifications)

 With those three simple changes to the language I think that 
 this feature will work for every T.
Specifying that there will be no further linkage is the same as making T finite. T must be finite. C# uses generics/IR/CLR so it can do things at run time that is effectively compile time for D. By simply extending the grammar slightly in an intuitive way, we can get the explicit finite case, which is easy: foo(T in [A,B,C])() and possibly for your case foo(T in <module>)() would work or foo(T in <program>)() the `in` keyword makes sense here and is not used nor ambiguous, I believe.
While I agree that `in` does make sense for the semantics involved, it is already used to do a failable key lookup (return pointer to value or null if not present) into an associative array [1] and in input contracts. It wouldn't be ambiguous AFAICT, but having a keyword mean three different things depending on context would make the language even more complex (to read). W.r.t. the idea in general: I think something like that could be valuable to have in the language, but since this essentially amounts to syntactic sugar (AFAICT), I'm not (yet) convinced that, with `static foreach` being included, it's worth the cost. [1] https://dlang.org/spec/expression.html#InExpression
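For concreteness, here is a minimal sketch of the two existing meanings of `in` that the proposal would add a third to (illustrative names, nothing from the thread's code):

```d
// 1. Failable associative-array key lookup: `key in aa` yields a
//    pointer to the value, or null if the key is absent.
void lookupExample()
{
    int[string] aa = ["answer": 42];
    if (auto p = "answer" in aa)
        assert(*p == 42);
    assert(("missing" in aa) is null);
}

// 2. Input contract: `in` introduces the precondition block
//    (checked in non-release builds).
int half(int x)
in { assert(x % 2 == 0); }
do { return x / 2; }

void main()
{
    lookupExample();
    assert(half(8) == 4);
}
```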
Sep 02
parent reply EntangledQuanta <EQ universe.com> writes:
On Saturday, 2 September 2017 at 21:19:31 UTC, Moritz Maxeiner 
wrote:
 On Saturday, 2 September 2017 at 00:00:43 UTC, EntangledQuanta 
 wrote:
 On Friday, 1 September 2017 at 23:25:04 UTC, Jesse Phillips 
 wrote:
 I've love being able to inherit and override generic 
 functions in C#. Unfortunately C# doesn't use templates and I 
 hit so many other issues where Generics just suck.

 I don't think it is appropriate to dismiss the need for the 
 compiler to generate a virtual function for every 
 instantiated T, after all, the compiler can't know you have a 
 finite known set of T unless you tell it.

 But lets assume we've told the compiler that it is compiling 
 all the source code and it does not need to compile for 
 future linking.

 First the compiler will need to make sure all virtual 
 functions can be generated for the derived classes. In this 
 case the compiler must note the template function and 
 validate all derived classes include it. That was easy.

 Next up each instantiation of the function needs a new 
 v-table entry in all derived classes. Current compiler 
 implementation will compile each module independently of each 
 other; so this feature could be specified to work within the 
 same module or new semantics can be written up of how the 
 compiler modifies already compiled modules and those which 
 reference the compiled modules (the object sizes would be 
 changing due to the v-table modifications)

 With those three simple changes to the language I think that 
 this feature will work for every T.
Specifying that there will be no further linkage is the same as making T finite. T must be finite. C# uses generics/IR/CLR so it can do things at run time that is effectively compile time for D. By simply extending the grammar slightly in an intuitive way, we can get the explicit finite case, which is easy: foo(T in [A,B,C])() and possibly for your case foo(T in <module>)() would work or foo(T in <program>)() the `in` keyword makes sense here and is not used nor ambiguous, I believe.
While I agree that `in` does make sense for the semantics involved, it is already used to do a failable key lookup (return pointer to value or null if not present) into an associative array [1] and input contracts. It wouldn't be ambiguous AFAICT, but having a keyword mean three different things depending on context would make the language even more complex (to read).
Yes, but they are independent, are they not? Maybe not. foo(T in Typelist)(): in, as used here, is not an input contract and is completely independent. I suppose for arrays it could be ambiguous. For me, and this is just me, I do not find it ambiguous. I don't find different meanings ambiguous unless the contexts overlap. Perceived ambiguity is not ambiguity, it's just ignorance... which can be overcome through learning. Hell, D has many cases where there are perceived ambiguities... as do most things.

But in any case, I could care less about the exact syntax. It's just a suggestion that makes the most logical sense with regard to the standard usage of in. If it is truly unambiguous then it can be used. Another alternative is foo(T of Typelist), which, AFAIK, of is not used in D or even most programming languages. Another could be foo(T -> Typelist), or even foo(T from Typelist), or whatever. Doesn't really matter. They all mean the same to me once the definition has been written in stone. Could use `foo(T eifjasldj Typelist)` for all I care. The important thing for me is that such a simple syntax exists rather than the "complex syntaxes" that have already been given (which are ultimately syntaxes, as everything is at the end of the day).
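For what it's worth, the effect of the proposed foo(T in Typelist)() restriction can already be approximated today with a template constraint; this is only a hedged sketch of that workaround (the names Typelist and foo are illustrative), not the proposed feature itself:

```d
import std.meta : AliasSeq, staticIndexOf;

// The finite set of permitted types.
alias Typelist = AliasSeq!(int, double);

// The constraint rejects any T not in Typelist at compile time.
void foo(T)() if (staticIndexOf!(T, Typelist) >= 0)
{
    // Only instantiable for T in Typelist.
}

void main()
{
    foo!int();       // compiles
    foo!double();    // compiles
    // foo!string(); // error: constraint not satisfied
}
```

The difference from the proposal is that a plain constrained template still cannot be virtual; the constraint only documents and enforces the finite set.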
 W.r.t. to the idea in general: I think something like that 
 could be valuable to have in the language, but since this 
 essentially amounts to syntactic sugar (AFAICT), but I'm not 
 (yet) convinced that with `static foreach` being included it's 
 worth the cost.
Everything is syntactic sugar. So it isn't about if but how much. We are all coding in 0's and 1's whether we realize it or not. The point of syntax (or syntactic sugar) is to reduce the amount of 0's and 1's that we have to *effectively* code by grouping common patterns into symbolic equivalents (by definition). This is all programming is. We define certain symbols to mean certain bit patterns, or generic bit patterns (an if keyword/symbol is a generic bit pattern: a set of machine instructions (0's and 1's) and substitution placeholders that are eventually filled with 0's and 1's).

No one can judge the usefulness of syntax until it has been created, because what determines how useful something is is its use. But you can't use something if it doesn't exist. I think many fail to get that. The initial questions should be: Is there a gap in the language? (Yes in this case.) Can the gap be filled? (This is a theoretical/mathematical question that has to be answered. Most people jump the gun here and make assumptions.) Does the gap need to be filled? Yes in this case, because all gaps ultimately need to be filled, but this then leads to the practical issues: Is the gap "large"? How much work will it take to fill it? Will filling it have utility? Etc. These practical questions can only be dealt with once the theoretical question of "is it possible" is dealt with.

I have shown it is possible (well, Jonathan gave a proof of concept first; I just implemented an automation for it). I think at least several of us should now be convinced that it is theoretically possible, since several ways have been shown to be fruitful. We are now at the point where you have said you are not convinced a new, simpler syntax is warranted. The only real way to know is to implement that syntax experimentally, use it, then compare with the other methods. But of course this is real work that most people are not willing to invest, and so they approximate, as you have, an answer.
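To make the "it is possible for a finite set of T" claim concrete, the `static foreach` approach discussed in this thread amounts to something like the following sketch (requires `static foreach`, i.e. DMD 2.076+; the type list and names are illustrative, not the thread's actual code):

```d
import std.meta : AliasSeq;

// The finite, known set of types the "virtual template" supports.
alias Supported = AliasSeq!(int, double, string);

interface I
{
    // One ordinary virtual overload is generated per supported type,
    // so the v-table stays finite and dispatch stays virtual.
    static foreach (T; Supported)
    {
        void go(T value);
    }
}

class C : I
{
    static foreach (T; Supported)
    {
        void go(T value) { /* handle a value of type T */ }
    }
}

void main()
{
    I c = new C();
    c.go(42);      // dispatches to the int overload
    c.go("text");  // dispatches to the string overload
}
```

This is exactly the kind of lowering a compiler-level feature could perform automatically once the set of T is known to be finite.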
I do not know, as you don't. We have our guesses derived from our experiences and our extrapolations. I can say that, in my case, it would only simplify my code by a few lines (and, of course, remove a library dependency, which I do not like anyways). What it mainly does is reduce kludges, and being the type of person that does not like kludges, it makes me "happier". If you are ok with kludges, then it won't affect you as much. The only things I can say are theoretical assertions, and it is up to you to decide if they are worth your time to implement (assuming you were the person).

1. Library solutions are always less desirable in the theoretical world. Ideally we would want a compiler that does everything and does it perfectly. Such an ideal may not be possible, but obviously compiler and language designers feel there is some amorphous ideal, and history shows compilers tend to move towards it. Libraries create dependencies on external code which have versioning issues, upkeep, etc. They are a middle-ground solution between the practical and the theoretical. But they are not something that should be "striven" for. Else, again, we should just write in binary and have everything implemented as a library solution (which, once we do, we will realize we have a compiler). Library solutions also add complexity to the code itself. It is a trade-off of compiler complexity vs user code complexity. The D community seems to love to push the complexity on the user. I feel this is partly due to those that deal with the compiler not really being coders (in the common sense of writing practical business applications for making $$$). For example, what has Walter actually coded as far as "practical stuff"? A video game? Did he even use D? This is not a jab at him, but my guess is that he is more of a mathematician than an engineer. You can't really do both and be great at them because there is only so much time in the day... even though they overlap greatly.
When you get into writing massive real-world applications that span hundreds of developers, I'd bet D starts to fail miserably... of course, unless you are writing the next clone of Pac-Man or some ad software. It's not that it can't be done, or that it can't be done well, but D starts showing its weaknesses the more difficult the problem becomes (and its strengths). You can write a simple command-line utility in just about any language... it's not a test of a language's strengths and weaknesses.

2. Given the nature of the topic, which is virtual templated functions, a parallel of virtual functions, it seems IMO that it is more of a core concept that fits nicely with the other pieces. Those pieces are not implemented as a library solution (they could be, but then we are back to 1). Hence, it is not too much of a leap to think that adding this feature as a compiler solution is warranted. Since these are a simple extension of a compiler solution, it seems natural that the compiler should deal with them. If it were a library solution then it would be natural to extend the library... not mix and match, which is what is generally being suggested.

Now, it's true that the suggested solutions are relatively straightforward. So the issue is somewhat moot now. It wasn't, at least for me, when I asked... and given that several people quickly denied that any such solution existed, that is what made this thread much longer than it needed to be. I'd prefer a compiler solution... that is my opinion. Do what you will with it. It means nothing at the end of the day. If I had my own compiler I would have already implemented it. If my compiler were so fragile that I could not add such a simple rewrite rule (which should be a very simple extension that introduces minimal complexity to the language or compiler), I'd either rewrite the compiler (fix it like it should be) or move on to greener fields.
Also keep in mind that what is complex to one person is not necessarily so to another. I just don't like to be *told* (not proven) that something is impossible when I very well know otherwise... it's really not about "liking" but the fact that those same people go and perpetuate their ignorance on other people. I can deal with it because I know better, but many people fall victim to such ignorance, and it's one of the reasons why the world has as many problems as it does.
Sep 02
parent reply Moritz Maxeiner <moritz ucworks.org> writes:
On Saturday, 2 September 2017 at 23:12:35 UTC, EntangledQuanta 
wrote:
 On Saturday, 2 September 2017 at 21:19:31 UTC, Moritz Maxeiner 
 wrote:
 On Saturday, 2 September 2017 at 00:00:43 UTC, EntangledQuanta 
 wrote:
 On Friday, 1 September 2017 at 23:25:04 UTC, Jesse Phillips 
 wrote:
 I've love being able to inherit and override generic 
 functions in C#. Unfortunately C# doesn't use templates and 
 I hit so many other issues where Generics just suck.

 I don't think it is appropriate to dismiss the need for the 
 compiler to generate a virtual function for every 
 instantiated T, after all, the compiler can't know you have 
 a finite known set of T unless you tell it.

 But lets assume we've told the compiler that it is compiling 
 all the source code and it does not need to compile for 
 future linking.

 First the compiler will need to make sure all virtual 
 functions can be generated for the derived classes. In this 
 case the compiler must note the template function and 
 validate all derived classes include it. That was easy.

 Next up each instantiation of the function needs a new 
 v-table entry in all derived classes. Current compiler 
 implementation will compile each module independently of 
 each other; so this feature could be specified to work 
 within the same module or new semantics can be written up of 
 how the compiler modifies already compiled modules and those 
 which reference the compiled modules (the object sizes would 
 be changing due to the v-table modifications)

 With those three simple changes to the language I think that 
 this feature will work for every T.
Specifying that there will be no further linkage is the same as making T finite. T must be finite. C# uses generics/IR/CLR so it can do things at run time that is effectively compile time for D. By simply extending the grammar slightly in an intuitive way, we can get the explicit finite case, which is easy: foo(T in [A,B,C])() and possibly for your case foo(T in <module>)() would work or foo(T in <program>)() the `in` keyword makes sense here and is not used nor ambiguous, I believe.
While I agree that `in` does make sense for the semantics involved, it is already used to do a failable key lookup (return pointer to value or null if not present) into an associative array [1] and input contracts. It wouldn't be ambiguous AFAICT, but having a keyword mean three different things depending on context would make the language even more complex (to read).
Yes, but they are independent, are they not? Maybe not. foo(T in Typelist)() in, as used here is not a input contract and completely independent. I suppose for arrays it could be ambiguous.
The contexts being independent of each other doesn't change that we would still be overloading the same keyword with three vastly different meanings. Two is already bad enough imho (and if I had a good idea with what to replace the "in" for AA's I'd propose removing that meaning).
 For me, and this is just me, I do not find it ambiguous. I 
 don't find different meanings ambiguous unless the context 
 overlaps. Perceived ambiguity is not ambiguity, it's just 
 ignorance... which can be overcome through learning. Hell, D 
 has many cases where there are perceived ambiguities... as do 
 most things.
It's not about ambiguity for me, it's about readability. The more significantly different meanings you overload some keyword - or symbol, for that matter - with, the harder it becomes to read.
 But in any case, I could care less about the exact syntax. It's 
 just a suggestion that makes the most logical sense with regard 
 to the standard usage of in. If it is truly unambiguous then it 
 can be used.
Well, yes, as I wrote, I think it is unambiguous (and can thus be used), I just think it shouldn't be used.
 Another alternative is

 foo(T of Typelist)

 which, AFAIK, of is not used in D and even most programming 
 languages. Another could be

 foo(T -> Typelist)

 or even

 foo(T from Typelist)
I would much rather see it as a generalization of existing template specialization syntax [1], which this is t.b.h. just a superset of (current syntax allows limiting to exactly one, you propose limiting to 'n'): --- foo(T: char) // Existing syntax: Limit T to the single type `char` foo(T: (A, B, C)) // New syntax: Limit T to one of A, B, or C --- Strictly speaking, this is exactly what template specialization is for, it's just that the current one only supports a single type instead of a set of types. Looking at the grammar rules, upgrading it like this is a fairly small change, so the cost there should be minimal.
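The existing single-type specialization this generalizes can be shown in a couple of lines (a minimal sketch; `foo` is an illustrative name):

```d
// Existing D template specialization: when the argument type matches
// the specialization exactly, the specialized template is selected
// over the general one.
void foo(T)(T x)        { /* general template */ }
void foo(T : char)(T x) { /* selected when T is char */ }

void main()
{
    foo(42);   // instantiates the general overload (T = int)
    foo('a');  // instantiates the char specialization
}
```

Under the proposed extension, the second declaration's `T : char` would simply accept a set of types instead of one.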
 or whatever. Doesn't really matter. They all mean the same to 
 me once the definition has been written in stone. Could use 
 `foo(T eifjasldj Typelist)` for all I care.
That's okay, but it does matter to me.
 The import thing for me is that such a simple syntax exists 
 rather than the "complex syntax's" that have already been 
 given(which are ultimately syntax's as everything is at the end 
 of the day).
Quoting a certain person (you know who you are) from DConf 2017: "Write a DIP". I'm quite happy to discuss this idea, but at the end of the day, as it's not an insignificant change to the language, someone will have to do the work and write a proposal.
 W.r.t. to the idea in general: I think something like that 
 could be valuable to have in the language, but since this 
 essentially amounts to syntactic sugar (AFAICT), but I'm not 
 (yet) convinced that with `static foreach` being included it's 
 worth the cost.
Everything is syntactic sugar. So it isn't about if but how much. We are all coding in 0's and 1's whether we realize it or not. The point if syntax(or syntactic sugar) is to reduce the amount of 0's and 1's that we have to *effectively* code by grouping common patterns in to symbolic equivalents(by definition).
AFAIK the difference between syntax sugar and enabling syntax in PLs usually comes down to the former allowing you to express concepts already representable by other constructs in the PL; when encountered, the syntax sugar could be lowered by the compiler to the more verbose syntax and still be both valid in the PL and recognizable as the concept (while this is vague, a prominent example would be lambdas in Java 8).
 No one can judge the usefulness of syntax until it has been 
 created because what determines how useful something is is its 
 use. But you can't use something if it doesn't exist. I think 
 many fail to get that.
Why do you think that? Less than ten people have participated in this thread so far.
 The initial questions should be: Is there a gap in the 
 language? (Yes in this case). Can the gap be filled? (this is a 
 theoretical/mathematical question that has to be answered.
 Most people jump the gun here and make assumptions)
Why do you assume that? I've not seen anyone here claiming that template parameter specialization to one of n types (which is the idea I replied to) couldn't be done in theory, only that it can't be done right now (the only claim that it can't be done that I noticed was w.r.t. (unspecialized) templates and virtual functions, which is correct due to D supporting separate compilation; specialized templates, however, should work in theory).
 Does the gap need to be filled? Yes in this case, because all 
 gaps ultimately need to be filled, but this then leads the 
 practical issues:
Actually, I disagree here. It only *needs* filling if enough users of the language actually care about it not being there. Otherwise, it's a *nice to have* (like generics and Go, or memory safety and C :p ). [1] https://dlang.org/spec/template.html#parameters_specialization
Sep 02
parent reply EntangledQuanta <EQ universe.com> writes:
On Sunday, 3 September 2017 at 02:39:19 UTC, Moritz Maxeiner 
wrote:
 On Saturday, 2 September 2017 at 23:12:35 UTC, EntangledQuanta 
 wrote:
 On Saturday, 2 September 2017 at 21:19:31 UTC, Moritz Maxeiner 
 wrote:
 On Saturday, 2 September 2017 at 00:00:43 UTC, 
 EntangledQuanta wrote:
 On Friday, 1 September 2017 at 23:25:04 UTC, Jesse Phillips 
 wrote:
 I've love being able to inherit and override generic 
 functions in C#. Unfortunately C# doesn't use templates and 
 I hit so many other issues where Generics just suck.

 I don't think it is appropriate to dismiss the need for the 
 compiler to generate a virtual function for every 
 instantiated T, after all, the compiler can't know you have 
 a finite known set of T unless you tell it.

 But lets assume we've told the compiler that it is 
 compiling all the source code and it does not need to 
 compile for future linking.

 First the compiler will need to make sure all virtual 
 functions can be generated for the derived classes. In this 
 case the compiler must note the template function and 
 validate all derived classes include it. That was easy.

 Next up each instantiation of the function needs a new 
 v-table entry in all derived classes. Current compiler 
 implementation will compile each module independently of 
 each other; so this feature could be specified to work 
 within the same module or new semantics can be written up 
 of how the compiler modifies already compiled modules and 
 those which reference the compiled modules (the object 
 sizes would be changing due to the v-table modifications)

 With those three simple changes to the language I think 
 that this feature will work for every T.
Specifying that there will be no further linkage is the same as making T finite. T must be finite. C# uses generics/IR/CLR so it can do things at run time that is effectively compile time for D. By simply extending the grammar slightly in an intuitive way, we can get the explicit finite case, which is easy: foo(T in [A,B,C])() and possibly for your case foo(T in <module>)() would work or foo(T in <program>)() the `in` keyword makes sense here and is not used nor ambiguous, I believe.
While I agree that `in` does make sense for the semantics involved, it is already used to do a failable key lookup (return pointer to value or null if not present) into an associative array [1] and input contracts. It wouldn't be ambiguous AFAICT, but having a keyword mean three different things depending on context would make the language even more complex (to read).
Yes, but they are independent, are they not? Maybe not. foo(T in Typelist)() in, as used here is not a input contract and completely independent. I suppose for arrays it could be ambiguous.
The contexts being independent of each other doesn't change that we would still be overloading the same keyword with three vastly different meanings. Two is already bad enough imho (and if I had a good idea with what to replace the "in" for AA's I'd propose removing that meaning).
Why? Don't you realize that the context matters, and it's what separates the meanings? In truly unambiguous contexts, it shouldn't matter. It may require one to decipher the context, which takes time, but there is nothing inherently wrong with it, and we are limited in how many symbols we can use (unfortunately we are generally stuck with the qwerty keyboard design, else we could use symbols out the ying-yang and make things much clearer; but even mathematics, which is a near-perfect language, "overloads" symbols' meanings). You have to do this sort of thing when you limit the number of keywords you use.

Again, ultimately it doesn't matter. A symbol is just a symbol. For me, as long as the context is clear, I don't see what kind of harm it can cause. You say it is bad, but you don't give the reasons why it is bad. If you like to think of `in` as having only one definition, then the question is why? You are limiting yourself. The natural languages abound with such multi-definitions, usually in an ambiguous way, and it can cause a lot of problems; but for computer languages it can't (else we couldn't actually compile the programs). Context-sensitive grammars are provably more expressive than context-free ones.

https://en.wikipedia.org/wiki/Context-sensitive_grammar

Again, I'm not necessarily arguing for them, just saying that one shouldn't avoid them just to avoid them.
 For me, and this is just me, I do not find it ambiguous. I 
 don't find different meanings ambiguous unless the context 
 overlaps. Perceived ambiguity is not ambiguity, it's just 
 ignorance... which can be overcome through learning. Hell, D 
 has many cases where there are perceived ambiguities... as do 
 most things.
It's not about ambiguity for me, it's about readability. The more significantly different meanings you overload some keyword - or symbol, for that matter - with, the harder it becomes to read.
I don't think that is true. Everything is hard to read. It's about experience. The more you experience something, the clearer it becomes. Only with true ambiguity is something impossible. I realize that one can design a language to be hard to parse due to apparent ambiguities, but I am talking about cases where they can be resolved immediately(in at most a few milliseconds). You are making general statements, and it is not that I disagree, but it depends on context(everything does). In this specific case, I think it is extremely clear what `in` means, so it is effectively like using a different token. Again, everyone is different, though, and has different experiences that help them parse things more naturally. I'm sure there are things that you might find easy that I would find hard. But that shouldn't stop me from learning about them. It makes me "smarter", to simplify the discussion.
 But in any case, I could care less about the exact syntax. 
 It's just a suggestion that makes the most logical sense with 
 regard to the standard usage of in. If it is truly unambiguous 
 then it can be used.
Well, yes, as I wrote, I think it is unambiguous (and can thus be used), I just think it shouldn't be used.
Yes, but you have only given the reason that it shouldn't be used because you believe that one shouldn't overload keywords because it makes it harder to parse the meaning. My rebuttal, as I have said, is that it is not harder, so your argument is not valid. All you could do is claim that it is hard, and we would have to find out who is more right. I have a logical argument against your absolute restriction though... in that it causes one to have to use more symbols. I would imagine you are against stuff like using "in1", "in2", etc. because they visibly are too close to each other. If you want "maximum" readability you are going to have to mathematically define that in a precise way, then come up with a grammar that expresses it. I think you'll find that the grammar will depend on each individual person. At best you could then take an average which satisfies most people up to some threshold... in which case, at some point in time later, that average will shift and your grammar will no longer be valid(it will no longer satisfy the average). Again, it's not that I completely disagree with you on a practical level. Lines have to be drawn, but it's about where to precisely draw that line. Drawing it in the wrong place leads to certain solutions that are generally problematic. That's how we know they are wrong: we draw a line, later realize it caused a bunch of problems, and then say "oh, that was the wrong way to do it". Only by drawing a bunch of wrong lines can we determine which ones are the best and use that info to predict better locations.
 Another alternative is

 foo(T of Typelist)

 which, AFAIK, of is not used in D and even most programming 
 languages. Another could be

 foo(T -> Typelist)

 or even

 foo(T from Typelist)
I would much rather see it as a generalization of existing template specialization syntax [1], which this is t.b.h. just a superset of (current syntax allows limiting to exactly one, you propose limiting to 'n'): --- foo(T: char) // Existing syntax: Limit T to the single type `char` foo(T: (A, B, C)) // New syntax: Limit T to one of A, B, or C ---
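For comparison, the existing single-type specialization syntax being generalized here works like this today (a small sketch; `describe` is a placeholder name):

```d
import std.stdio;

// Existing specialization syntax: the first overload is preferred
// when T is deduced as char; otherwise the unspecialized one wins.
string describe(T : char)(T x) { return "char specialization"; }
string describe(T)(T x)        { return "generic"; }

void main()
{
    writeln(describe('a')); // prints "char specialization"
    writeln(describe(1));   // prints "generic"
}
```

The proposal above would let the pattern after `:` be a set of types rather than exactly one.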
Yes, if this worked, I'd be fine with it. Again, I could care less. `:` == `in` for me as long as `:` has the correct meaning of "can be one of the following" or whatever. But AFAIK, `:` is not just "can be one of the following"(which is "in" or "element of" in the mathematical sense) but can also mean "is a derived type of". All I'm after is the capability to do something elegantly, and when it doesn't exist, I "crave" that it does. I don't really care how it is done(but remember, it must be done elegantly). I am not "confused"(or whatever you want to call it) by symbolic notation, as long as it's clearly defined so I can learn the definition and it is not ambiguous. There are all kinds of symbols that can be used; again, we are limited by qwerty(for speed — no one wants to have to use alt-codes in programming; there is a way around this, but it would scare most people). e.g., T ∈ X is another expression(more mathematical; I assume that ∈ will be displayed correctly, it is Alt+2208) that could work, but ∈ is not ASCII and so can't be used(not because it can't be, but because of people's lack of will to progress out of the dark ages).
 Strictly speaking, this is exactly what template specialization 
 is for, it's just that the current one only supports a single 
 type instead of a set of types.
 Looking at the grammar rules, upgrading it like this is a 
 fairly small change, so the cost there should be minimal.
If that is the case then go for it ;) It is not a concern of mine. You tell me the syntax and I will use it. (I'd have no choice, of course, but if it's short and sweet then I won't have any problem.) The main reason I suggest syntax is because none exists and I assume, maybe wrongly, that people will get what I am saying more easily than from writing up some example library solution and demonstrating that. If I say something like class/struct { foo(T ∈ X)(); } defines a virtual template function for all T in X, which is equivalent to class/struct { foo(X1)(); ... foo(Xn)(); }, I assume that most people will understand, more or less, the notation I used and be able to interpret what I am trying to get at. It is a mix of pseudo-programming and mathematics, but it is not complex. ∈ might be a bit confusing, but looking it up and learning about it will educate those that want to be educated and expand everyone's ability to communicate better. I could, of course, be more precise, but I try to be precise only when it suits me(which may be a fault, but, again, I only have so many hours in the day to do stuff).
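To put that intended expansion in concrete (valid) D — the proposed syntax in the comment is hypothetical, but the overload set it would expand to is ordinary D with one virtual slot per type:

```d
// Hypothetical proposed syntax (NOT valid D today):
//   interface I { void go(T ∈ (int, double))(); }
// Intended meaning: the finite expansion below, which IS valid D
// and gives one ordinary virtual overload per member of the list.
interface I
{
    string go(int x);    // expansion for T = int
    string go(double x); // expansion for T = double
}

class C : I
{
    string go(int x)    { return "go(int)"; }
    string go(double x) { return "go(double)"; }
}

void main()
{
    I c = new C();
    assert(c.go(42)  == "go(int)");    // virtual dispatch works
    assert(c.go(1.5) == "go(double)");
}
```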
 or whatever. Doesn't really matter. They all mean the same to 
 me once the definition has been written in stone. Could use 
 `foo(T eifjasldj Typelist)` for all I care.
That's okay, but it does matter to me.
That's fine. I am willing to compromise. Lucky for you, symbols/tokens and context are not a big deal to me. Of course, I do like short and sweet, so I am biased too, but I have much more leeway it seems.
 The important thing for me is that such a simple syntax exists 
 rather than the "complex syntaxes" that have already been 
 given(which are ultimately syntaxes, as everything is at the 
 end of the day).
Quoting a certain person (you know who you are) from DConf 2017: "Write a DIP". I'm quite happy to discuss this idea, but at the end of the day, as it's not an insignificant change to the language, someone will have to do the work and write a proposal.
My main issues with going through the trouble is that basically I have more important things to do. If I were going to try to get D to do all the changes I actually wanted, I'd be better off writing my own language the way I envision it and want it... but I don't have 10+ years to invest in such a beast and to do it right would require my full attention, which I'm not willing to give, because again, I have better things to do(things I really enjoy). So, all I can do is hopefully stoke the fire enough to get someone else interested in the feature and have them do the work. If they don't, then they don't, that is fine. But I feel like I've done something to try to right a wrong.
 W.r.t. to the idea in general: I think something like that 
 could be valuable to have in the language, but since this 
 essentially amounts to syntactic sugar (AFAICT), but I'm not 
 (yet) convinced that with `static foreach` being included 
 it's worth the cost.
Everything is syntactic sugar. So it isn't about if, but how much. We are all coding in 0's and 1's whether we realize it or not. The point of syntax(or syntactic sugar) is to reduce the amount of 0's and 1's that we have to *effectively* code, by grouping common patterns into symbolic equivalents(by definition).
AFAIK the difference between syntax sugar and enabling syntax in PLs usually comes down to the former allowing you to express concepts already representable by other constructs in the PL; when encountered, the syntax sugar could be lowered by the compiler to the more verbose syntax and still be both valid in the PL and recognizable as the concept (while this is vague, a prominent example would be lambdas in Java 8).
Yes, but everything is "lowered"; it's just how you define it. It is all lowering to 0's and 1's. Syntactic sugar is colloquially used like you have defined it, but in the limit(the most general sense), it's just stuff. Why? Because what is sugar to one person is salt to another(this is hyperbole, of course, but you should be able to get my point). e.g., you could define syntactic sugar to be an enhancement that can be directly rewritten into a currently expressible syntax in the language. That is fine. But then what if that expressible syntax was also syntactic sugar? You end up with something like L(L(L(L(x)))) where L is a "lowering" and x is something that is not "lowered". But if you actually were able to trace the evolution of the compiler, you'd surely notice that x is just L(...L(y)...) for some y. A programming language is simply something that takes a set of bits and transforms them to another set of bits. No more and no less. Everything else is "syntactic sugar". The definition may be so general as to be useless, but it is what a programming language is(mathematically at least). Think about it a bit. How did programmers program before modern compilers came along? They used punch cards or levers, which are basically setting "bits" or various "functions"(behaviors) that the machine would carry out. Certain functions and combinations of functions were deemed more useful and were combined into "meta-functions" and given special bits to represent them. This process has been carried out ad nauseam, and we are where we are today because of this process(fundamentally). But the point is, at each step, someone can claim that the current "simplifying" of complex functions into a "meta-function" is just "syntactic sugar". This process, though, is actually what creates the "power" in things. The same thing happens at the hardware level... the same thing happens with atoms and molecules(except we are not in control of the rules of how those things combine).
 No one can judge the usefulness of syntax until it has been 
 created because what determines how useful something is is its 
 use. But you can't use something if it doesn't exist. I think 
 many fail to get that.
Why do you think that? Less than ten people have participated in this thread so far.
I am not talking about just this thread, I am talking about all threads and all things in which humans attempt to determine the use of something. e.g., the use of computers(they used to be completely useless for most people because they failed to see the use in them(they weren't useful to them)). The use of medicine... the use of a newborn baby, the use of life. The use of a turtle. People judge use in terms of what it does for them on a "personal" level, and my point is that this inability to see the use of something in an absolute sense(how useful is it to the whole, be it the whole of the D programming community, the whole of humanity, the whole of life, or whatever) is a severe shortcoming of almost all humans. It didn't crop up too much in this thread, but I have definitely seen it in other threads. Most first say "Well, hell, that won't help me, that is useless". They forget that it may be useless to them at that moment, but might be useful to them later, and might be useful to other people. Whether something is useless to someone, though, almost entirely depends on their use of it. You can't know how useful something is until you use it... and this is why so many people judge the use of something the way they do(they can't help it, it's sort of a law of the universe). Let me explain, as it might not be clear: many people many years ago used to think X was useless. Today, those same people cannot live without X. Replace X with just about anything(computers, music, OOP, etc.). But if you had asked those people back then, they would have told you those things were useless. But through whatever means(the way life is) things change, and things that were previously useless become useful. They didn't know that at first because they didn't use those things to find out if they were useful. The same logic SHOULD be applied to everything. We don't know how useful something is until we use it *enough* to determine if it is useful.
But this is not the logic most people use, including many people in the D community. They first judge, almost exclusively(it depends on the person), how it relates to their own personal self. This is fundamentally wrong IMO, and while I don't have mathematical proof, I do have a lot of experience that tells me so(history being a good friend).
 The initial questions should be: Is there a gap in the 
 language? (Yes in this case). Can the gap be filled? (this is 
 a theoretical/mathematical question that has to be answered.
 Most people jump the gun here and make assumptions)
Why do you assume that? I've not seen anyone here claiming template parameter specialization to one of n types (which is the idea I replied to) couldn't be done in theory, only that it can't be done right now (the only claim as to that it can't be done I noticed was w.r.t. (unspecialized) templates and virtual functions, which is correct due to D supporting separate compilation; specialized templates, however, should work in theory).
Let me quote the first two responses: "It can't work this way. You can try std.variant." and "It is not possible to have a function be both virtual and templated. A function template generates a new function definition every time that it's called with a new set of template arguments. So, the actual functions are not known up front, and that fundamentally does not work with virtual functions, where the functions need to be known up front, and you get a different function by a look-up occurring in the virtual function call table for the class. Templates and virtual functions simply don't mix. You're going to have to come up with a solution that does not try and mix templates and virtual functions." Now, I realize I might have not been clear about things, and maybe there is confusion/ambiguity in what I meant, how they interpreted it, or how I interpreted their response... but there is definitely no sense of a "Yes, we can make this work in some way..." type of mentality. e.g., "Templates and virtual functions simply don't mix." That is an absolute statement. It isn't even qualified with "in D".
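For readers of the thread, the failure mode those responses describe can be sketched minimally: a template member function is implicitly non-virtual in D, so the original example only works once the type is fixed. The names below are illustrative, not from the original post:

```d
interface I
{
    // A template member is implicitly non-virtual: each instantiation
    // is a distinct function, and under separate compilation the
    // compiler cannot reserve vtable slots for every possible one.
    // void go(T)(T x); // calling this through I fails to link

    // Fixing the type restores ordinary virtual dispatch:
    string go(int x);
}

class C : I
{
    string go(int x) { return "C.go"; }
}

void main()
{
    I c = new C();
    assert(c.go(1) == "C.go"); // dispatches virtually to C
}
```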
 Does the gap need to be filled? Yes in this case, because all 
 gaps ultimately need to be filled, but this then leads the 
 practical issues:
Actually, I disagree here. It only *needs* filling if enough users of the language actually care about it not being there. Otherwise, it's a *nice to have* (like generics in Go, or memory safety in C :p ).
Yes, on some level you are right... but again, who's to judge? The current users or the future users? You have to take into account the future users if you care about the future of D, because those will be the users of it, and so the current users actually carry only a certain percentage of the weight. Also, who will be more informed about the capabilities and useful features of D? The current users or the future users? Surely when you first started using D, you were ignorant of many of the pros and cons of D. Your future self(in regard to that time period when you first started using D) knew a lot more about it; i.e., you know more now than you did, and you will know more in the future than you do now. The great thing about knowledge is that it grows with time when watered. You stuck around with D, learned it each "day", and became more knowledgeable about it. At the time, there were people making decisions about the future of D features, and now you get to experience them and determine their usefulness PRECISELY because of those people in the past filling in the gaps. EVERYTHING that D currently has, it didn't have in the past. Hence, someone had to create it(DIP or no DIP)... thank god they did, or D would just be a pimple on Walter's brain. But D can't progress any further unless the same principles are applied. Sure, it is more bulky(complex), and sure, not everything has to be implemented in the compiler to make progress... But the only way we can truly know what we should do is first to do things we think are correct(and not do things we know are wrong). So, when people say "this can't be done" and I know it damn well can, I will throw a little tantrum... maybe they will give me a cookie, who knows? Sure, I could be wrong... but I could also be right(just as much as they could be wrong or be right). This is why we talk about things: to combine our experiences and ideas to figure out how well something will work.
The main problem I see, in the D community, is that very little cooperation is done in those regards unless it's initiated by the core team(that isn't a bad thing in some sense but it isn't a good thing in another sense). I guess some people just haven't learned the old proverb "Where there's a will, there's a way".
 [1] 
 https://dlang.org/spec/template.html#parameters_specialization
As I mentioned, I'm unclear whether `:` behaves exactly that way or not; `:` seems to do more than be inclusive. If its current meaning can still work with virtual templated functions, then I think it would be even better. But ultimately all this would have to be fleshed out properly before any real work could be done.
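A sketch of the concern about `:` doing "more than be inclusive": in a template parameter list, `T : Base` already accepts anything implicitly convertible to `Base`, including subclasses, not just the single type. (`f`, `Base`, and `Derived` here are illustrative names.)

```d
class Base {}
class Derived : Base {}

// In a template parameter list, `T : Base` means "T deducible as
// implicitly convertible to Base" — so it matches subclasses too,
// not only the exact type Base.
string f(T : Base)(T x) { return T.stringof; }

void main()
{
    assert(f(new Base())    == "Base");
    assert(f(new Derived()) == "Derived"); // Derived matches as well
}
```

So a generalized `T : (A, B, C)` would have to either inherit or deliberately drop this convertibility behavior.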
Sep 02
parent reply Moritz Maxeiner <moritz ucworks.org> writes:
On Sunday, 3 September 2017 at 04:18:03 UTC, EntangledQuanta 
wrote:
 On Sunday, 3 September 2017 at 02:39:19 UTC, Moritz Maxeiner 
 wrote:
 On Saturday, 2 September 2017 at 23:12:35 UTC, EntangledQuanta 
 wrote:
 [...]
The contexts being independent of each other doesn't change that we would still be overloading the same keyword with three vastly different meanings. Two is already bad enough imho (and if I had a good idea with what to replace the "in" for AA's I'd propose removing that meaning).
Why? Don't you realize that the contexts matters and [...]
Because instead of seeing the keyword and knowing its one meaning you also have to consider the context it appears in. That is intrinsically more work (though the difference may be very small) and thus harder.
 Again, I'm not necessarily arguing for them, just saying that 
 one shouldn't avoid them just to avoid them.


 [...]
It's not about ambiguity for me, it's about readability. The more significantly different meanings you overload some keyword - or symbol, for that matter - with, the harder it becomes to read.
I don't think that is true. Everything is hard to read. It's about experience. The more you experience something the more clear it becomes. Only with true ambiguity is something impossible. I realize that in one can design a language to be hard to parse due to apparent ambiguities, but am I am talking about cases where they can be resolved immediately(at most a few milliseconds).
Experience helps, of course, but it doesn't change that it's still just that little bit slower. And every time we allow such overloading, it encourages more, which adds up in the end.
 You are making general statements, and it is not that I 
 disagree, but it depends on context(everything does). In this 
 specific case, I think it is extremely clear what in means, so 
 it is effectively like using a different token. Again, everyone 
 is different though and have different experiences that help 
 them parse things more naturally. I'm sure there are things 
 that you might find easy that I would find hard. But that 
 shouldn't stop me from learning about them. It makes me 
 "smarter", to simplify the discussion.
I am, because I believe it to be generally true for "1 keyword |-> 1 meaning" to be easier to read than "1 keyword and 1 context |-> 1 meaning" as the former inherently takes less time.
 [...]
Well, yes, as I wrote, I think it is unambiguous (and can thus be used), I just think it shouldn't be used.
Yes, but you have only given the reason that it shouldn't be used because you believe that one shouldn't overload keywords because it makes it harder to parse the meaning. My rebuttal, as I have said, is that it is not harder, so your argument is not valid. All you could do is claim that it is hard and we would have to find out who is more right.
As I countered that in the above, I don't think your rebuttal is valid.
 I have a logical argument against your absolute restriction 
 though... in that it causes one to have to use more symbols. I 
 would imagine you are against stuff like using "in1", "in2", 
 etc because they visibly are to close to each other.
It's not an absolute restriction, it's an absolute position from which I argue against including such overloading on principle. If it can be overcome by demonstrating that it can't sensibly be done without more overloading and that it adds enough value to be worth the increased overloading, I'd be fine with inclusion.
 [...]
I would much rather see it as a generalization of existing template specialization syntax [1], which this is t.b.h. just a superset of (current syntax allows limiting to exactly one, you propose limiting to 'n'): --- foo(T: char) // Existing syntax: Limit T to the single type `char` foo(T: (A, B, C)) // New syntax: Limit T to one of A, B, or C ---
Yes, if this worked, I'd be fine with it. Again, I could care less. `:` == `in` for me as long as `:` has the correct meaning of "can be one of the following" or whatever. But AFAIK, : is not "can be one of the following"(which is "in" or "element of" in the mathematical sense) but can also mean "is a derived type of".
Right, ":" is indeed an overloaded symbol in D (and ironically, instead of with "in", I think all its meanings are valuable enough to be worth the cost). I don't see how that would interfere in this context, though, as we don't actually overload a new meaning (it's still "restrict this type to the thing to the right").

 If that is the case then go for it ;) It is not a concern of 
 mine. You tell me the syntax and I will use it. (I'd have no 
 choice, of course, but if it's short and sweet then I won't 
 have any problem).
I'm discussing this as a matter of theory, I don't have a use for it.
 [...]
Quoting a certain person (you know who you are) from DConf 2017: "Write a DIP". I'm quite happy to discuss this idea, but at the end of the day, as it's not an insignificant change to the language, someone will have to do the work and write a proposal.
My main issues with going through the trouble is that basically I have more important things to do. If I were going to try to get D to do all the changes I actually wanted, I'd be better off writing my own language the way I envision it and want it... but I don't have 10+ years to invest in such a beast and to do it right would require my full attention, which I'm not willing to give, because again, I have better things to do(things I really enjoy). So, all I can do is hopefully stoke the fire enough to get someone else interested in the feature and have them do the work. If they don't, then they don't, that is fine. But I feel like I've done something to try to right a wrong.
That could happen, though historically speaking, usually things have gotten included in D only when the major proponent of something like this does the hard work (otherwise they seem to just fizzle out).
 [...]
AFAIK the difference between syntax sugar and enabling syntax in PLs usually comes down to the former allowing you to express concepts already representable by other constructs in the PL; when encountered, the syntax sugar could be lowered by the compiler to the more verbose syntax and still be both valid in the PL and recognizable as the concept (while this is vague, a prominent example would be lambdas in Java 8).
Yes, but everything is "lowered" it's just how you define it.
Yes and w.r.t to my initial point, I did define it as "within the PL itself, preserving the concept".
 [...]
Why do you think that? Less than ten people have participated in this thread so far.
I am not talking about just this thread, I am talking about in all threads and all things in which humans attempt to determine the use of something. [...]
Fair enough, though personally I'd need to see empirical proof of those general claims about human behaviour before I could share that position.
 [...]
Why do you assume that? I've not seen anyone here claiming template parameter specialization to one of n types (which is the idea I replied to) couldn't be done in theory, only that it can't be done right now (the only claim as to that it can't be done I noticed was w.r.t. (unspecialized) templates and virtual functions, which is correct due to D supporting separate compilation; specialized templates, however, should work in theory).
Let me quote the first two responses: "It can't work this way. You can try std.variant."
That is a reply to your mixing (unspecialized) templates and virtual functions, not to your idea of generalizing specialized templates.
 and

 "It is not possible to have a function be both virtual and 
 templated. A function template generates a new function 
 definition every time that it's a called with a new set of 
 template arguments. [...]"
Same here.
 Now, I realize I might have no been clear about things and 
 maybe there is confusion/ambiguity in what I meant, how they 
 interpreted it, or how I interpreted their response... but 
 there is definitely no sense of "Yes, we can make this work in 
 some way..." type of mentality.

 e.g., "Templates and virtual functions simply don't mix."

 That is an absolute statement. It isn't even qualified with "in 
 D".

 [...]
Actually, I disagree here. It only *needs* filling if enough users of the language actually care about it not being there. Otherwise, it's a *nice to have* (like generics in Go, or memory safety in C :p ).
Yes, on some level you are right... but again, who's to judge? [...]
Ultimately, Walter and Andrei, as AFAIK they decide what gets into the language.
Sep 03
parent reply EntangledQuanta <EQ universe.com> writes:
On Sunday, 3 September 2017 at 11:48:38 UTC, Moritz Maxeiner 
wrote:
 On Sunday, 3 September 2017 at 04:18:03 UTC, EntangledQuanta 
 wrote:
 On Sunday, 3 September 2017 at 02:39:19 UTC, Moritz Maxeiner 
 wrote:
 On Saturday, 2 September 2017 at 23:12:35 UTC, 
 EntangledQuanta wrote:
 [...]
The contexts being independent of each other doesn't change that we would still be overloading the same keyword with three vastly different meanings. Two is already bad enough imho (and if I had a good idea with what to replace the "in" for AA's I'd propose removing that meaning).
Why? Don't you realize that the contexts matters and [...]
Because instead of seeing the keyword and knowing its one meaning you also have to consider the context it appears in. That is intrinsically more work (though the difference may be very small) and thus harder. ...
Yes, in an absolute sense, it will take more time to have to parse the context. But that sounds like a case of "premature optimization". If we are worried about saving time, then what about the tooling? Compiler speed? IDE startup time? etc.? All these take time too, and optimizing one single aspect, as you know, won't necessarily save much time. Maybe the language itself should be designed so there are no ambiguities at all? A single simple symbol for each function? A new keyboard design should be implemented(ultimately a direct brain-to-editor interface for the fastest time, excluding the time for development and learning)? So, in this case I have to go with the practical view of saying that it may be theoretically slower, but it is such an insignificant cost that it is an over-optimization. I think you would agree, at least in this case. Again, the exact syntax is not important to me. If you really think it matters that much to you, and it does(you are not tricking yourself), then use a different keyword. When I see something I try to see it at once rather than reading it left to right. It is how music is read properly, for example. One can't read left to right and process the notes in real time fast enough. You must "see at once" a large chunk. When I see foo(A in B)() I see it at once, not in parts or sub-symbols(subconsciously that may be what happens, but it either is so quick, or my brain has learned to see differently enough, that I do not feel it to be any slower). That is, I do not read it like f, o, o, (, A, i,... but just like how one sees an image. Sure, there are clusterings such as foo and (...), and I do sub-parse those at some point, but the context is derived very quickly. Now, of course, I do make assumptions to be able to do that. Obviously I have to sorta assume I'm reading D code and that the expression is a templated function, etc. But that is required regardless. It's like seeing a picture of an ocean.
You can see the global characteristics immediately without getting bogged down in the details until you need them. You can determine the approximate time of day(morning, noon, evening, night) relatively instantaneously without even knowing much else. To really counter your argument: what about parentheses? They too have the same problem as `in`. They have perceived ambiguity... but they are not ambiguous. So your argument should apply to them too, and you should be against them also; but are you? [To be clear here: foo()() and (3+4) show 3 different use cases of ()'s... the first is template arguments, the second is function arguments, and the third is expression grouping.] If you are, then you are being logical and consistent; if you are not, then you are being neither logical nor consistent. If you fall in the latter case, I suggest you re-evaluate the way you think about such things, because you are picking and choosing. Now, if you are just stating the mathematical fact that it takes longer, then I can't really deny that, although I can't technically prove it either, as you can't, because that would require knowing exactly how the brain processes the information.
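The parenthesis point in concrete form (a tiny sketch; `add` is a made-up name):

```d
// Three syntactic roles for parentheses, told apart purely by context:
// template parameter list, function argument list, expression grouping.
int add(int x)(int y) { return x + y; } // template params, then runtime params

void main()
{
    auto r = add!(1)(2 + (3 * 4)); // instantiation, call, and grouping
    assert(r == 15);
}
```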
 [...]
Well, yes, as I wrote, I think it is unambiguous (and can thus be used), I just think it shouldn't be used.
Yes, but you have only given the reason that it shouldn't be used because you believe that one shouldn't overload keywords because it makes it harder to parse the meaning. My rebuttal, as I have said, is that it is not harder, so your argument is not valid. All you could do is claim that it is hard and we would have to find out who is more right.
As I countered that in the above, I don't think your rebuttal is valid.
Well, hopefully I countered that in my rebuttal of your rebuttal of my rebuttal ;) Again, you don't actually know how the brain processes information(no one does; it is all educated guesses). You use the idea that the more information one has to process, the more time it takes... which seems logical, but it is not necessarily applicable directly to the interpretation of written symbols. Think of an image. We can process a ton of information nearly instantly, and if the logic applied, we would expect images to take much longer to "read" than the written word, yet it is exactly the opposite... and yet symbols are just images(with a specific order we must follow to make sense of them). Have you ever thought of a programming language that was based on images? Maybe that would be a much quicker and much faster way to "read" the source? Of course, some might claim that all that life is, is source code, and "real life" is just the most natural representation of code.
 I have a logical argument against your absolute restriction 
 though... in that it causes one to have to use more symbols. I 
 would imagine you are against stuff like using "in1", "in2", 
 etc. because they visibly are too close to each other.
It's not an absolute restriction, it's an absolute position from which I argue against including such overloading on principle. If it can be overcome by demonstrating that it can't sensibly be done without more overloading and that it adds enough value to be worth the increased overloading, I'd be fine with inclusion.
My feeling is, though, that you are actually just making principles based on whim rather than a true logical basis; I could be wrong. Depending on how you answer my questions above I will know better. To simplify it down: do you have the same problems with all the ambiguities that already exist in almost all programming languages, which everyone is ok with on a practical level on a daily basis?
 [...]
 If that is the case then go for it ;) It is not a concern of 
 mine. You tell me the syntax and I will use it. (I'd have no 
 choice, of course, but if it's short and sweet then I won't 
 have any problem).
I'm discussing this as a matter of theory, I don't have a use for it.
Ok, I do, which is what led me to the problem, as all my "enhancements" do. I try something I think is an "elegant" way to simplify complexity in my program (from the perspective of the user of the code, which will generally be me)... I run into a wall, I post a message, and I usually get shot down immediately with "It can't be done"... then I have to find a way to do it. I find the way [usually using string mixins, thank god for them]. I post it... someone else then usually comes along with a better or simpler way.

Usually when I say something like "This should be in the compiler", I immediately get shot down again with "It adds complexity to the compiler". In which case I try to explain that everything adds complexity and this solution would add very little, since one can already do it in the library in a simple way... Usually the library solution is not robust and hence not good (I only worked it out enough for my use cases). ...and so the wheel goes around and around.

But the logic is usually the same: "we can't do that"... which I eventually just interpret as "we don't wanna do that because we have better things to do", which is fine if at least that was admitted in the first place instead of wasting my time trying to explain that it can be done, coming up with a solution, etc. (Of course, it's ultimately my fault since I am the one in control of my time; I mainly do it because it could help others in the same position that I was in.)
 [...]
Quoting a certain person (you know who you are) from DConf 2017: "Write a DIP". I'm quite happy to discuss this idea, but at the end of the day, as it's not an insignificant change to the language, someone will have to do the work and write a proposal.
My main issues with going through the trouble is that basically I have more important things to do. If I were going to try to get D to do all the changes I actually wanted, I'd be better off writing my own language the way I envision it and want it... but I don't have 10+ years to invest in such a beast and to do it right would require my full attention, which I'm not willing to give, because again, I have better things to do(things I really enjoy). So, all I can do is hopefully stoke the fire enough to get someone else interested in the feature and have them do the work. If they don't, then they don't, that is fine. But I feel like I've done something to try to right a wrong.
That could happen, though historically speaking, usually things have gotten included in D only when the major proponent of something like this does the hard work (otherwise they seem to just fizzle out).
Yes. Because things take time and we only have so much. I am fine with that. I'm fine with a great idea going no where because no one has the time to invest in it. It's unfortunate but life is life... it's only when people ultimately are trying to deceive that or are just truly ignorant when I start to have a problem with them.
 [...]
AFAIK the difference between syntax sugar and enabling syntax in PLs usually comes down to the former allowing you to express concepts already representable by other constructs in the PL; when encountered, the syntax sugar could be lowered by the compiler to the more verbose syntax and still be both valid in the PL and recognizable as the concept (while this is vague, a prominent example would be lambdas in Java 8).
Yes, but everything is "lowered"; it's just a matter of how you define it.
Yes, and w.r.t. my initial point, I did define it as "within the PL itself, preserving the concept".
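To give a D-native instance of that kind of lowering (a minimal sketch; `twice` is just an illustrative free function):

```d
// UFCS (uniform function call syntax) is a concrete example of sugar in D:
// the compiler lowers `a.f(b)` to the ordinary call `f(a, b)`, which is
// still valid D and still recognizable as the same concept.
int twice(int x) { return 2 * x; }

void main()
{
    auto a = 21.twice(); // lowered to twice(21)
    assert(a == twice(21));
}
```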
 [...]
Why do you think that? Less than ten people have participated in this thread so far.
I am not talking about just this thread; I am talking about all threads and all things in which humans attempt to determine the use of something. [...]
Fair enough, though personally I'd need to see empirical proof of those general claims about human behaviour before I could share that position.
Lol, you should have plenty of proof. Just look around. Just look at your own experiences in your life. I don't know much about you, but I imagine that you have all the proof you need. Look how businesses are run. Look how people "solve" problems. Look at the state of the world. You can make claims that it's this and that, as I can... but there is a common denominator among it all.

Also just think about how humans are able to judge things. Surely they can only judge based on what they know? How can we judge things based on what we don't know? Seems impossible, right? Take someone you know who constantly makes bad decisions... why? Are they geniuses or idiots? I think it's pretty provable that the more intelligent a person is, the better they are able to make decisions about something... and this is general. A programmer is surely able to make better decisions about coding than a non-programmer? Look at all the business people in the world who know absolutely nothing about technological factors but make such decisions about them on a daily basis... and the ramifications of those decisions are easily seen. I'm not saying it's a simple problem, but there are relatively simple overarching rules involved. The more a person knows about life, the better the decisions they can make about life. (But the life thing is the complex part, I don't disagree.)

To make this tie in to what we are talking about: if someone has never used templated functions in D, how can they make decisions on whether templated functions are useful or not? Should be obvious. The complexity comes in when they actually have used them... but then we have to know "How much do they use them?", "How do they use them?", "What other things do they know about that influence their usage of them?", etc. Most people are satisfied with just stopping at some arbitrary point when they get tired and have to go to bed... I'm not one of those people (for better or worse).
 [...]
Why do you assume that? I've not seen anyone here claiming template parameter specialization to one of n types (which is the idea I replied to) couldn't be done in theory, only that it can't be done right now (the only claim as to that it can't be done I noticed was w.r.t. (unspecialized) templates and virtual functions, which is correct due to D supporting separate compilation; specialized templates, however, should work in theory).
Let me quote the first two responses: "It can't work this way. You can try std.variant."
That is a reply to your mixing (unspecialized) templates and virtual functions, not to your idea of generalizing specialized templates.
That might have been the reply, and it may be valid in a certain context, and may actually be the correct reply in the context I gave (I could have done a better job, I admit). BUT, if D already implemented such a specialization feature, a different response would have occurred, such as: "You need to limit T to a finite set", with which I would have merrily moved along. Instead it tries to force me into a solution that is not acceptable. In fact, I was using specialization in the sense that `T` could only come from a finite set... but, again, D does not give me any way to specify that, so how could I properly formulate a solution that would make sense without going into a lot of detail... a lot of details that I actually don't know, because I'm not a full-time D aficionado. The code I posted was a simplification, possibly an oversimplification, of my real code, in which I tried to express something I wanted to do, knew that there should be no real technical limitations (in what I wanted, not in how D does it), and thought that D should be able to do it in some way (mainly because it can do just about anything in some way due to its rich feature set).
 and

 "It is not possible to have a function be both virtual and 
 templated. A function template generates a new function 
 definition every time that it's a called with a new set of 
 template arguments. [...]"
Same here.
But it's not true... unless you mean "it is not possible currently in D to do this". Neither of those statements is logically valid, because it is possible (only with a restricted number of template parameter values). It is only true for an infinite number, which didn't apply to me since I had a finite number.

Basically an absolute statement is made, something like "All numbers are odd", which is absolutely false even if it is partially true. "All odd numbers are odd" is obviously true. One should clarify if the context isn't clear, so no confusion arises.

"It is not possible to have a function be both virtual and templated." Surely you disagree with that statement? While there is some ambiguity, since templated functions are actually syntactic sugar while virtual functions are actually coded, we can obviously have a virtual templated function. (Not in D currently, but there is no theoretical reason why it can't exist; we've already discussed that.)

"It is not possible to have a function be both virtual and [arbitrarily] templated." would, I believe, be a true statement, while "It is not possible to have a function be both virtual and [finitely] templated." would be a false statement. In fact, I bet that if you asked Jonathan what he believed when he wrote that, he believed it to be true for all cases (finite or not, as he probably never thought about the finite case enough to realize it matters).

Anyways, we've beaten this horse to death! I think we basically agree on the bulk of things, so it's not a big deal. Most of the issue with communication is the lack of clarity and the ambiguity in things (wars have been started and millions of people have died over such things, as have many personal relationships been destroyed). I'd like to see such a feature implemented in D one day, but I doubt it will be, for whatever reasons. Luckily D is powerful enough to still get at a solid solution... unlike some languages, and I think that is what most of us here realize about D and why we even bother with it.
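To make the finite case concrete, here is a minimal sketch of how a "virtual templated function" over a finite type set can be emulated today by generating one ordinary virtual overload per type. It assumes a compiler with `static foreach` (DMD 2.076+); on older compilers the same overload set can be produced with a string mixin, which is essentially the workaround described above. The names (`Supported`, `Go`, etc.) are illustrative:

```d
import std.meta : AliasSeq;

// The finite set of types Go must support (illustrative).
alias Supported = AliasSeq!(int, double, string);

struct S(T) { T s; }

interface I
{
    // Generate one ordinary (and thus virtual) overload per supported type.
    static foreach (T; Supported)
        void Go(S!T s);

    static final I New() { return new C(); }
}

class C : I
{
    static foreach (T; Supported)
        void Go(S!T s) { /* one virtual overload per supported T */ }
}

void main()
{
    S!int s;
    auto c = I.New();
    c.Go(s); // dispatches virtually, no cast to C needed
}
```

This is only viable because the set is finite and known up front; for an unbounded `T`, separate compilation makes the vtable-per-instantiation approach impossible, which is the distinction argued over in this thread.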
Sep 03
parent reply Moritz Maxeiner <moritz ucworks.org> writes:
On Sunday, 3 September 2017 at 23:25:47 UTC, EntangledQuanta 
wrote:
 On Sunday, 3 September 2017 at 11:48:38 UTC, Moritz Maxeiner 
 wrote:
 On Sunday, 3 September 2017 at 04:18:03 UTC, EntangledQuanta 
 wrote:
 On Sunday, 3 September 2017 at 02:39:19 UTC, Moritz Maxeiner 
 wrote:
 On Saturday, 2 September 2017 at 23:12:35 UTC, 
 EntangledQuanta wrote:
 [...]
The contexts being independent of each other doesn't change that we would still be overloading the same keyword with three vastly different meanings. Two is already bad enough imho (and if I had a good idea with what to replace the "in" for AA's I'd propose removing that meaning).
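For reference, the two existing meanings of `in` under discussion look like this in current D (a minimal sketch):

```d
// Meaning 1: `in` as a parameter storage class (const; msg may not
// be mutated inside the function).
void show(in string msg) { }

void main()
{
    // Meaning 2: `in` as the membership operator for associative
    // arrays; it yields a pointer to the value, or null if absent.
    int[string] aa = ["a": 1];
    if (auto p = "a" in aa)
        assert(*p == 1);
}
```

The proposal being debated would have added a third, template-related meaning on top of these two.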
Why? Don't you realize that the contexts matter and [...]
Because instead of seeing the keyword and knowing its one meaning you also have to consider the context it appears in. That is intrinsically more work (though the difference may be very small) and thus harder. ...
Yes, in an absolute sense, it will take more time to have to parse the context. But that sounds like a case of "pre-optimization".
I don't agree, because once something is in the language syntax, removing it is a long deprecation process (years), so these things have to be considered well beforehand.
 If we are worried about saving time then what about the 
 tooling? compiler speed? IDE startup time? etc?
 All these take time too and optimizing one single aspect, as 
 you know, won't necessarily save much time.
Their speed generally does not affect the time one has to spend to understand a piece of code.
 Maybe the language itself should be designed so there are no 
 ambiguities at all? A single simple for each function? A new 
 keyboard design should be implemented(ultimately a direct brain 
 to editor interface for the fastest time, excluding the time 
 for development and learning)?
I assume you mean "without context sensitive meanings" instead of "no ambiguities", because the latter should be the case as a matter of course (and mostly is, with few exceptions such as the dangling else ambiguity in C and friends). Assuming the former: As I stated earlier, it needs to be worth the cost.
 So, in this case I have to go with the practical of saying that 
 it may be theoretically slower, but it is such an insignificant 
 cost that it is an over optimization. I think you would agree, 
 at least in this case.
Which is why I stated I'm opposing overloading `in` here as a matter of principle, because even small costs sum up in the long run if we get into the habit of just overloading.
 Again, the exact syntax is not import to me. If you really 
 think it matters that much to you and it does(you are not 
 tricking yourself), then use a different keyword.
My proposal remains to not use a keyword and just upgrade existing template specialization.
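For context, this is what existing template specialization looks like; the finite-set form in the final comment is hypothetical syntax sketched to illustrate the idea of "upgrading" specialization, not valid D:

```d
// Existing specialization: the most specialized overload that matches wins.
void foo(T)(T x)       { } // general case
void foo(T : int)(T x) { } // chosen when T is int

void main()
{
    foo(3.14); // general case
    foo(42);   // int specialization
}

// A hypothetical extension in the spirit of this proposal might
// constrain T to a finite set (NOT valid D today):
//   void Go(T : int | double | string)(S!T s);
```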
 When I see something I try to see it at once rather [...]
 To really counter your argument: What about parenthesis? They 
 too have the same problem with in. They have perceived 
 ambiguity... but they are not ambiguity. So your argument 
 should be said about them too and you should be against them 
 also, but are you? [To be clear here: foo()() and (3+4) have 3 
 different use cases of ()'s... The first is templated 
 arguments, the second is function arguments, and the third is 
 expression grouping]
That doesn't counter my argument, it just states that parentheses have these costs, as well (which they do). The primary question would still be if they're worth that cost, which imho they are. Regardless of that, though, since they are already part of the language syntax (and are not going to be up for change), this is not something we could do something about, even if we agreed they weren't worth the cost. New syntax, however, is up for that kind of discussion, because once it's in it's essentially set in stone (not quite, but *very* slow to remove/change because of backwards compatibility).
 [...]
Well, yes, as I wrote, I think it is unambiguous (and can thus be used), I just think it shouldn't be used.
Yes, but you have only given the reason that it shouldn't be used because you believe that one shouldn't overload keywords because it makes it harder to parse the meaning. My rebuttal, as I have said, is that it is not harder, so your argument is not valid. All you could do is claim that it is hard and we would have to find out who is more right.
As I countered that in the above, I don't think your rebuttal is valid.
Well, hopefully I countered that in my rebuttal of your rebuttal of my rebuttal ;)
Not as far as I see it, though I'm willing to agree to disagree :)
 I have a logical argument against your absolute restriction 
 though... in that it causes one to have to use more symbols. 
 I would imagine you are against stuff like using "in1", 
 "in2", etc because they visibly are to close to each other.
It's not an absolute restriction, it's an absolute position from which I argue against including such overloading on principle. If it can be overcome by demonstrating that it can't sensibly be done without more overloading and that it adds enough value to be worth the increased overloading, I'd be fine with inclusion.
[...] To simplify it down: do you have the same problems with all the ambiguities that already exist in almost all programming languages, which everyone is ok with on a practical level on a daily basis?
Again, you seem to mix ambiguity and context sensitivity. W.r.t. the latter: I have a problem with those occurrences where I don't think the costs I associate with it are outweighed by its benefits (e.g. with the `in` keyword's overloaded meaning for AA's).
 [...]
Why do you think that? Less than ten people have participated in this thread so far.
I am not talking about just this thread, I am talking about in all threads and all things in which humans attempt to determine the use of something. [...]
Fair enough, though personally I'd need to see empirical proof of those general claims about human behaviour before I could share that position.
Lol, you should have plenty of proof. Just look around. [...]
Anecdotes/generalizing from personal experiences do not equate to proof (which is why they're usually accompanied by things like "in my experience").
 I'd like to see such a feature implemented in D one day, but I 
 doubt it will for whatever reasons. Luckily D is powerful 
 enough to still get at a solid solution.. unlike some 
 languages, and I think that is what most of us here realize 
 about D and why we even bother with it.
Well, so far the (singular) reason is that nobody that wants it in the language has invested the time to write a DIP :p
Sep 03
parent reply EntangledQuanta <EQ universe.com> writes:
On Monday, 4 September 2017 at 01:50:48 UTC, Moritz Maxeiner 
wrote:
 On Sunday, 3 September 2017 at 23:25:47 UTC, EntangledQuanta 
 wrote:
 On Sunday, 3 September 2017 at 11:48:38 UTC, Moritz Maxeiner 
 wrote:
 On Sunday, 3 September 2017 at 04:18:03 UTC, EntangledQuanta 
 wrote:
 On Sunday, 3 September 2017 at 02:39:19 UTC, Moritz Maxeiner 
 wrote:
 On Saturday, 2 September 2017 at 23:12:35 UTC, 
 EntangledQuanta wrote:
 [...]
The contexts being independent of each other doesn't change that we would still be overloading the same keyword with three vastly different meanings. Two is already bad enough imho (and if I had a good idea with what to replace the "in" for AA's I'd propose removing that meaning).
Why? Don't you realize that the contexts matters and [...]
Because instead of seeing the keyword and knowing its one meaning you also have to consider the context it appears in. That is intrinsically more work (though the difference may be very small) and thus harder. ...
Yes, in an absolute sense, it will take more time to have to parse the context. But that sounds like a case of "pre-optimization".
I don't agree, because once something is in the language syntax, removing it is a long deprecation process (years), so these things have to be considered well beforehand.
That's true. But I don't see how it matters too much in the current argument. Remember, I'm not advocating using 'in' ;) I'm only saying it doesn't matter in a theoretical sense. If humans were as logical as they should be, it would matter less. For example, a computer has no issue with using `in`, and it doesn't really take any more processing (maybe a cycle, but the context makes it clear). But, of course, we are not computers. So, in a practical sense, yes, the line has to be drawn somewhere, even if, IMO, it is not the best place. You agree with this because you say it's ok for parentheses but not for `in`. You didn't seem to answer anything about my statements and question about images, though.

But I'm ok with people drawing lines in the sand; that really isn't what I'm arguing. We have to draw lines. My point is, we should know we are drawing lines. You seem to know this on some significant level, but I don't think most people do. So, if we argued for the next 10 years, we would just come to some refinement of our current opinions and experiences about the idea. That's a good thing in a sense, but I don't have 10 years to waste on such a trivial concept that really doesn't matter much ;) (Again, remember, I'm not advocating `in`; I'm not advocating anything in particular, just against doing nothing.)
 If we are worried about saving time then what about the 
 tooling? compiler speed? IDE startup time? etc?
 All these take time too and optimizing one single aspect, as 
 you know, won't necessarily save much time.
Their speed generally does not affect the time one has to spend to understand a piece of code.
Yes, but you are picking and choosing. To understand code, you have to write code; to write code you need a compiler, IDE, etc. You need a book, the internet, or other resources to learn things too. It's a much, much bigger can of worms than you realize or want to get into. Everything is interdependent. It's nice to make believe that we can separate everything into nice little quanta, but we can't, and when we ultimately try, we get results that make no sense. But, of course, it's about the best we can do with where humans are at in their evolution currently. The ramifications of one minor change can change everything... see the butterfly effect. Life is fractal-like, IMO (I can't prove it, but the evidence is staggering).

I mean, when you say "read code faster", I assume you mean from the moment you start to read a piece of code with your eyes to the end of the code... But do you realize that, in some sense, that is meaningless? What about the time it takes to turn on your computer? Why are you not including that? Or the time to scroll your mouse? These things matter because surely you are trying to save time in the "absolute" sense? E.g., so you have more time to spend with your family at the end of the day? Or spend more time hitting a little white ball into a hole? Or whatever? If all you did was read code and had no other factors involved in the absolute time, then you would be 100% correct. But all those other factors do add up too. Of course, the more code you read, the more important it becomes and the less the other factors matter, but then why are you reading so much code if you think it's a waste of time? So you can save some more time to read more code?

If your goal is truly to read as much code as you can in your life span, then I think your analysis is 99.999...% correct. If you only code as a means to an end for other things, then I think your answer is about 10-40% correct (with a high degree of error, and dependent on context). 
For me, and the way I "value"/"judge" time, it is: how much stuff can I fit in a day of my life that I like to do, and how can I minimize the things that I ultimately do not want to do? Coding is one of those things I do not like to do. I do it as a means to an end. Hence, having tooling, IDEs, compilers, etc. that help me do what I want to do coding-wise as fast as possible (overall) is what is important. I just think here you are focusing on one tiny aspect of the picture. It's not a bad thing; optimizing the whole requires optimizing all the parts. Just make sure you don't get caught up in optimizing something that isn't really that important. (You know this, because you are a coder, but it applies to life too, because we are all just "code" anyways.)
 Maybe the language itself should be designed so there are no 
 ambiguities at all? A single simple for each function? A new 
 keyboard design should be implemented(ultimately a direct 
 brain to editor interface for the fastest time, excluding the 
 time for development and learning)?
I assume you mean "without context sensitive meanings" instead of "no ambiguities", because the latter should be the case as a matter of course (and mostly is, with few exceptions such as the dangling else ambiguity in C and friends). Assuming the former: As I stated earlier, it needs to be worth the cost.
yes, I mean what I called "perceived ambiguities" because true ambiguities are impossible to compile logically, they are "errors".
 So, in this case I have to go with the practical of saying 
 that it may be theoretically slower, but it is such an 
 insignificant cost that it is an over optimization. I think 
 you would agree, at least in this case.
Which is why I stated I'm opposing overloading `in` here as a matter of principle, because even small costs sum up in the long run if we get into the habit of just overloading.
I know, You just haven't convinced me enough to change my opinion that it really matters at the end of the day. It's going to be hard to convince me since I really don't feel as strongly as you do about it. That might seem like a contradiction, but
 Again, the exact syntax is not import to me. If you really 
 think it matters that much to you and it does(you are not 
 tricking yourself), then use a different keyword.
My proposal remains to not use a keyword and just upgrade existing template specialization.
I think that is a better way too, because it is based on a solid principle: https://en.wikipedia.org/wiki/Relational_theory, in a sense. I see it more as: things make more sense to the brain the closer those things are in relationship. Space may or may not have absolute meaning without objects, but humans can understand space better when there is stuff inside it (stuff that relates to space, which I think some call feng shui ;). You just really hadn't stated that principle in any clear way for me to understand what you meant until now. That is, stating something like "...as a matter of principle" without stating which principle is ambiguous. Because some principles are not real. Some base their principles on fictitious things, some on abstract ideals, etc. Basing something on a principle that is firmly established is meaningful.
 When I see something I try to see it at once rather [...]
 To really counter your argument: What about parenthesis? They 
 too have the same problem with in. They have perceived 
 ambiguity... but they are not ambiguity. So your argument 
 should be said about them too and you should be against them 
 also, but are you? [To be clear here: foo()() and (3+4) have 3 
 different use cases of ()'s... The first is templated 
 arguments, the second is function arguments, and the third is 
 expression grouping]
That doesn't counter my argument, it just states that parentheses have these costs, as well (which they do). The primary question would still be if they're worth that cost, which imho they are. Regardless of that, though, since they are already part of the language syntax (and are not going to be up for change), this is not something we could do something about, even if we agreed they weren't worth the cost. New syntax, however, is up for that kind of discussion, because once it's in it's essentially set in stone (not quite, but *very* slow to remove/change because of backwards compatibility).
Well, all I can really say about it is that one can't really know the costs. I've said that before. We guess. Hence, the best way out of this box is usually through experiment. We try something and see how it feels and whether it seems to work. I'm talking about this in general.
 [...]
Well, yes, as I wrote, I think it is unambiguous (and can thus be used), I just think it shouldn't be used.
Yes, but you have only given the reason that it shouldn't be used because you believe that one shouldn't overload keywords because it makes it harder to parse the meaning. My rebuttal, as I have said, is that it is not harder, so your argument is not valid. All you could do is claim that it is hard and we would have to find out who is more right.
As I countered that in the above, I don't think your rebuttal is valid.
Well, hopefully I countered that in my rebuttal of your rebuttal of my rebuttal ;)
Not as far as I see it, though I'm willing to agree to disagree :)
 I have a logical argument against your absolute restriction 
 though... in that it causes one to have to use more symbols. 
 I would imagine you are against stuff like using "in1", 
 "in2", etc. because they visibly are too close to each other.
It's not an absolute restriction, it's an absolute position from which I argue against including such overloading on principle. If it can be overcome by demonstrating that it can't sensibly be done without more overloading and that it adds enough value to be worth the increased overloading, I'd be fine with inclusion.
[...] To simplify it down: do you have the same problems with all the ambiguities that already exist in almost all programming languages, which everyone is ok with on a practical level on a daily basis?
Again, you seem to mix ambiguity and context sensitivity. W.r.t. the latter: I have a problem with those occurrences where I don't think the costs I associate with it are outweighed by its benefits (e.g. with the `in` keyword's overloaded meaning for AA's).
Not mixing; I exclude real ambiguities because they have no real meaning. I thought I mentioned something about that way back when, but who knows... Although, I'd be curious if any programming languages existed whose grammar was ambiguous and could actually be realized? So, my "[perceived] ambiguity" is your context sensitivity. But I was more trying to hint at how an arbitrary human may be confused by seeing the same thing used in two different contexts with two different meanings. They tend to just see them as ambiguities at first and are confused, until they learn the context, in which case the ambiguities no longer exist. They weren't real ambiguities in the first place, but they "perceived" them as such. Usually context sensitivity, in the context of programming languages, has a very specific interpretation, so I didn't want to use it.
 [...]
Why do you think that? Less than ten people have participated in this thread so far.
I am not talking about just this thread, I am talking about in all threads and all things in which humans attempt to determine the use of something. [...]
Fair enough, though personally I'd need to see empirical proof of those general claims about human behaviour before I could share that position.
Lol, you should have plenty of proof. Just look around. [...]
Anecdotes/generalizing from personal experiences do not equate to proof (which is why they're usually accompanied by things like "in my experience").
There is no such thing as proof in life. If there were, we'd surely have something close to it by now. At best, we have mathematical proof. It might be that existence is mathematical (it seems to be so, as mathematics can be used to explain the relationships between just about anything). But human behavior is pretty typical and has patterns, just like most phenomena. As much as humans have discovered about life, the general pattern is that the more we learn, the more we see that there are underlying factors that generate these patterns. And so, if you believe in this pattern-oriented nature of life (which is more fractal-like/self-similar), you can start connecting dots. It may turn out that you connected them wrong, but that's a step in the right direction. You reconnect things and learn something new. Draw a new line...

Look how children behave. You remember how you were as a child, the things that went on. Do you think that once a human "grows up" they somehow change and grow beyond those behaviors? Or is it more logical that those behaviors just "morph" into 'new' behaviors that are really just functions of the old? When you realize that people are just children with older bodies and more experiences, you start to see patterns. E.g., politicians. They are just certain children. You might have had a friend who, now that you look back, you could say was a "politician" (or whatever). Grown-up behavior is just child behavior that has grown up. It is not completely different.

The same can be said of programming languages. Programming languages don't jump from one thing to another but evolve in a continuous way. What we experience now is the evolution of everything that came before. There are no holes or gaps or leaps. It is a differentiable function, so to speak (but of, probably, an infinite number of dimensions). Everything is connected/related. Anyways, I think we are starting to drift into the weeds (but that is usually where the big fish hide!) ;)
 I'd like to see such a feature implemented in D one day, but I 
 doubt it will for whatever reasons. Luckily D is powerful 
 enough to still get at a solid solution.. unlike some 
 languages, and I think that is what most of us here realize 
 about D and why we even bother with it.
Well, so far the (singular) reason is that nobody that wants it in the language has invested the time to write a DIP :p
Yep. I guess the problem with the D community is that there are no real "champions" of it. Walter is not a King Arthur. I feel that he's probably lost a lot of the youthful zeal that usually provides the impetus for great things. Many of the contributors here do not make money off of D in any significant way and hence do it more as a hobby, so the practical side of things prevents D from really accomplishing great heights (at this point). I hope there is enough thrust for escape velocity, though (my feeling is there isn't, but I hope I'm wrong). Like you say about saving cycles reading: if D isn't going to be around in any significant way (if it starts to stagnate in the next few years), then my investment of time will not be well rewarded. I've learned a lot of new programming things and some life stuff from it, so it's not a total loss, but it will be a shame (not just for me). What we know for sure is that if D progresses at a certain "rate", it will be overtaken by other languages and eventually die out. This is a fact, as it will happen (everything that lives dies; another "overgeneralization" born of circumstantial evidence, but one everyone should be able to agree on...). D has to keep up with the Kardashians if it wants to be cool... unfortunately.
Sep 03
parent Moritz Maxeiner <moritz ucworks.org> writes:
On Monday, 4 September 2017 at 03:08:50 UTC, EntangledQuanta 
wrote:
 On Monday, 4 September 2017 at 01:50:48 UTC, Moritz Maxeiner 
 wrote:
 On Sunday, 3 September 2017 at 23:25:47 UTC, EntangledQuanta 
 wrote:
 On Sunday, 3 September 2017 at 11:48:38 UTC, Moritz Maxeiner 
 wrote:
 On Sunday, 3 September 2017 at 04:18:03 UTC, EntangledQuanta 
 wrote:
 On Sunday, 3 September 2017 at 02:39:19 UTC, Moritz 
 Maxeiner wrote:
 On Saturday, 2 September 2017 at 23:12:35 UTC, 
 EntangledQuanta wrote:
 [...]
The contexts being independent of each other doesn't change that we would still be overloading the same keyword with three vastly different meanings. Two is already bad enough imho (and if I had a good idea with what to replace the "in" for AA's I'd propose removing that meaning).
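For concreteness, the two existing meanings of `in` at issue can be sketched as follows (the function name `takes` is just illustrative):

```d
// Meaning 1: `in` as a function parameter storage class,
// marking x as a read-only input to the function.
void takes(in int x)
{
    // x = 5; // would be a compile error: cannot modify `in` parameter
}

void main()
{
    int[string] aa = ["one": 1];

    // Meaning 2: `in` as the binary membership operator on associative
    // arrays; it yields a pointer to the value, or null if absent.
    auto p = "one" in aa;
    assert(p !is null && *p == 1);
    assert(("two" in aa) is null);

    takes(3);
}
```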
Why? Don't you realize that the context matters and [...]
Because instead of seeing the keyword and knowing its one meaning you also have to consider the context it appears in. That is intrinsically more work (though the difference may be very small) and thus harder. ...
Yes, in an absolute sense it will take more time to have to parse the context. But that sounds like a case of "premature optimization".
I don't agree, because once something is in the language syntax, removing it is a long deprecation process (years), so these things have to be considered well beforehand.
That's true. But I don't see how it matters too much in the current argument. Remember, I'm not advocating using 'in' ;) [...]
It matters, because that makes it not be _early_ optimization.
 If we are worried about saving time then what about the 
 tooling? compiler speed? IDE startup time? etc?
 All these take time too and optimizing one single aspect, as 
 you know, won't necessarily save much time.
Their speed generally does not affect the time one has to spend to understand a piece of code.
Yes, but you are picking and choosing. [...]
I'm not (in this case), as the picking is implied by discussing PL syntax.
 So, in this case I have to go with the practical of saying 
 that it may be theoretically slower, but it is such an 
 insignificant cost that it is an over optimization. I think 
 you would agree, at least in this case.
Which is why I stated I'm opposing overloading `in` here as a matter of principle, because even small costs sum up in the long run if we get into the habit of just overloading.
I know. You just haven't convinced me enough to change my opinion that it really matters at the end of the day. It's going to be hard to convince me, since I really don't feel as strongly as you do about it. That might seem like a contradiction, but
I'm not trying to convince you of anything.
 Again, the exact syntax is not import to me. If you really 
 think it matters that much to you and it does(you are not 
 tricking yourself), then use a different keyword.
My proposal remains to not use a keyword and just upgrade existing template specialization.
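As a reference point, D's existing template specialization syntax (the mechanism being proposed as the thing to upgrade) looks like this; the `describe` functions and their return strings are just illustrative:

```d
// General template: matches any T.
string describe(T)(T x)
{
    return "generic";
}

// Existing specialization syntax: chosen when T is exactly int.
string describe(T : int)(T x)
{
    return "int";
}

void main()
{
    assert(describe("hi") == "generic"); // picks the general template
    assert(describe(42) == "int");       // picks the T : int specialization
}
```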
[...] You just really hadn't stated that principle clearly enough for me to understand what you meant until now. That is, stating something like "... as a matter of principle" without naming the principle is ambiguous, because some principles are not real: some people base their principles on fictitious things, some on abstract ideals, etc. Basing something on a firmly established principle is meaningful.
I've stated the principle several times in varied forms of "syntax changes need to be worth the cost".
 I have a logical argument against your absolute restriction 
 though... in that it causes one to have to use more 
 symbols. I would imagine you are against stuff like using 
 "in1", "in2", etc because they visibly are to close to each 
 other.
It's not an absolute restriction, it's an absolute position from which I argue against including such overloading on principle. If it can be overcome by demonstrating that it can't sensibly be done without more overloading, and that it adds enough value to be worth the increased overloading, I'd be fine with inclusion.
[...] To simplify it down: do you have the same problems with all the ambiguities that already exist in almost all programming languages, which everyone is ok with on a practical level on a daily basis?
Again, you seem to mix ambiguity and context sensitivity. W.r.t. the latter: I have a problem with those occurences where I don't think the costs I associate with it are outweighed by its benefits (e.g. with the `in` keyword overloaded meaning for AA's).
Not mixing; I exclude real ambiguities because they have no real meaning. I thought I mentioned something about that way back when, but who knows... Although, I'd be curious whether any programming language exists whose grammar is ambiguous and could actually be realized?
Sure, see the dangling else problem I mentioned. It's just that people basically all agree on one of the choices and all stick with it (despite the grammar being formally ambiguous).
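To spell out the dangling-else case: in the ambiguous form `if (a) if (b) x; else y;` the grammar permits two parses, and virtually every language picks the nearest-`if` reading by convention. Written out unambiguously (function names and return strings are illustrative):

```d
// The reading languages conventionally pick: else binds to the nearest if.
string nearestIfReading(bool a, bool b)
{
    if (a) { if (b) return "x"; else return "y"; }
    return "fallthrough";
}

// The other, conventionally rejected reading: else binds to the outer if.
string outerIfReading(bool a, bool b)
{
    if (a) { if (b) return "x"; } else return "y";
    return "fallthrough";
}

void main()
{
    // The two readings disagree when a is false:
    assert(nearestIfReading(false, true) == "fallthrough");
    assert(outerIfReading(false, true) == "y");
}
```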
 [...]
Why do you think that? Less than ten people have participated in this thread so far.
I am not talking about just this thread; I am talking about all threads and all situations in which humans attempt to determine the use of something. [...]
Fair enough, though personally I'd need to see empirical proof of those general claims about human behaviour before I could share that position.
Lol, you should have plenty of proof. Just look around. [...]
Anecdotes and generalizations from personal experience do not equate to proof (which is why they're usually accompanied by qualifiers like "in my experience").
There is no such thing as proof in life. [...]
There is a significant difference between generalizing from one person's point of view and following the scientific method in order to reach reproducible results (even in soft sciences).
 I'd like to see such a feature implemented in D one day, but 
 I doubt it will for whatever reasons. Luckily D is powerful 
 enough to still get at a solid solution.. unlike some 
 languages, and I think that is what most of us here realize 
 about D and why we even bother with it.
Well, so far the (singular) reason is that nobody that wants it in the language has invested the time to write a DIP :p
Yep. I guess the problem with the D community is that there are no real "champions" of it.
There are for the specific points that interest them. Walter currently pushes escape analysis (see his DConf2017 talk about how DIP1000 improves the situation there by a lot), Andrei pushed std.experimental.allocator, which is still being improved to reach maturity. We also have quite a few people who have championed DIPs in the last months (one I especially care about is DIP1009 btw).
 Many of the contributors here do not make money off of D in any 
 significant way and hence do it more as a hobby. So the 
 practical side of things prevent D from really accomplishing 
 great heights(at this point).
I actually disagree on the conclusion. From my experience, things primarily done for money (especially in the software business) are pretty much always done to the worst possible quality you can get away with.
 What we know for sure, if D does progress at a specific "rate", 
 it will be overtaken by other languages and eventually die out.
I don't see this happening anytime soon, as all other native system PLs are so far behind D in terms of readability and maintainability that it's not even funny anymore. Regardless, should that unlikely scenario happen, that's okay, too, because in order for them to actually overtake D, they'll have to incorporate the things from D I like (otherwise they haven't actually overtaken it in terms of PL design).
 This is a fact, as it will happen(everything dies that lives, 
 another "over generalization" born in circumstantial evidence 
 but that everyone should be able to agree on...). D has to keep 
 up with the Kardashians if it wants to be cool... 
 unfortunately.
I can't speak for anyone else, but I'm not using D because I think D wants to be cool (I don't think it does), I use it because more often than not it's the best tool available and I believe the people who designed it actually cared about its quality.
Sep 03
prev sibling parent crimaniak <crimaniak gmail.com> writes:
On Wednesday, 30 August 2017 at 20:47:12 UTC, EntangledQuanta 
wrote:

 interface I
 {	
 	void Go(T)(S!T s);

 	static final I New()
 	{
 		return new C();
 	}
 }

 abstract class A : I
 {
 	
 }


 class C : A
 {
 	void Go(T)(S!T s)
 	{
 		
 	}
 }
 This is a blocker for me! Can someone open a ticket?
Judging by the length of the thread (which I did not read), the real problem was not spotted; otherwise, it would be shorter. The problem is the "virtual method in the interface" anti-pattern: template member functions can never be virtual, since each instantiation would need its own vtable slot and the set of instantiations is unbounded. Just never do that, and life will be easier. In this case, I recommend moving Go to A and making it just a dispatcher for specialized, private, non-templated virtual functions. You don't need all this mess with string templates for it.
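A minimal sketch of that dispatcher shape, keeping the thread's original names (the `go` overloads for int and double and their return strings are illustrative; add one overload per primitive type actually needed — here the dispatcher sits on the interface rather than on A, which works just as well):

```d
struct S(T) { T s; }

interface I
{
    // Non-templated overloads: these are virtual and are what
    // subclasses actually implement, one per supported type.
    string go(S!int s);
    string go(S!double s);

    // Final (non-virtual) template dispatcher: instantiated against the
    // static type I, it forwards into the virtual overload set above.
    final string Go(T)(S!T s) { return go(s); }

    static final I New() { return new C(); }
}

abstract class A : I {}

class C : A
{
    string go(S!int s)    { return "int"; }
    string go(S!double s) { return "double"; }
}

void main()
{
    auto c = I.New();
    S!int s;
    assert(c.Go(s) == "int"); // now works through the interface reference
}
```

The template never needs to be virtual: overload resolution happens at compile time inside `Go`, and only the chosen non-template `go` goes through the vtable.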
Sep 04