
digitalmars.D - static map as a type function

reply Stefan Koch <uplink.coder googlemail.com> writes:
Hi guys,

So, today I wanted to implement handling of typesafe variadic 
arrays for type functions (alias[] ...).
When writing a test for it I found out that I already did (on the 
talias_master branch which is different from talias_safe).

So here's the static map as a type function.
This also shows off the interplay between templates and type functions.
That should be surprising since type functions are just regular 
functions that happen to accept types as arguments ;)

---

struct DummyType {} // just a dummy to get the right inference

auto getUDAs(alias T) { return __traits(getAttributes, T); }
// needed because that's the shortest thing that'll return alias[]

alias alias_array = typeof(getUDAs(DummyType));

auto static_map_tf(alias F)(alias_array types ...)
{
     typeof(F(DummyType))[] result;
     result.length = types.length;

     foreach(i, t;types)
     {
         result[i] = F(t);
     }

     return result;
}

size_t sizeOf(alias t)
{
     return t.sizeof;
}

alias Int = int;
alias Ushort = ushort;
// we need these aliases because the parser won't allow us to call a
// function with a reserved type identifier.

static assert(static_map_tf!sizeOf(Int, Ushort) == [4, 2]);

---

This code has been working quietly for a while.
So go and download 
https://github.com/UplinkCoder/dmd/tree/talias_master
Build your own compiler.
And play with type functions.
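
For reference, the same check written against the existing template-based std.meta.staticMap from Phobos would look roughly like this (a sketch for comparison only; note the template version yields a sequence you have to index or wrap, rather than a plain size_t[]):

---

import std.meta : staticMap;

enum sizeOf(T) = T.sizeof; // template counterpart of the sizeOf type function above

alias sizes = staticMap!(sizeOf, int, ushort);
static assert(sizes[0] == 4 && sizes[1] == 2);

---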

Cheers,

Stefan
Sep 23 2020
next sibling parent Stefan Koch <uplink.coder googlemail.com> writes:
On Wednesday, 23 September 2020 at 09:46:54 UTC, Stefan Koch 
wrote:

 So go and download 
 https://github.com/UplinkCoder/dmd/tree/talias_master
 Build your own compiler.
 And play with type functions.
Note: type functions are still an experimental feature; certain actions will cause the interaction with druntime to fail, for example setting the length of an alias[]. So do expect errors near the end of compilation; I am fixing those as fast as my time allows.

Also, the -sktf switch has to be passed for type functions to be active, since they require a semantic rule to be circumvented, the rule being "You can't call a function with types." That issue is actually a bit tricky to address because of the implementation in dmd: I can't always know which function a call resolves to before deciding whether to allow types (iff the function is a type function) or not.
Sep 23 2020
prev sibling next sibling parent reply Stefan Koch <uplink.coder googlemail.com> writes:
On Wednesday, 23 September 2020 at 09:46:54 UTC, Stefan Koch 
wrote:

 That should be surprising since type functions are just regular 
 functions that happen to accept types as arguments ;)
I meant that should _not_ be surprising.
Sep 23 2020
parent reply Per Nordlöw <per.nordlow gmail.com> writes:
On Wednesday, 23 September 2020 at 10:51:41 UTC, Stefan Koch 
wrote:
 On Wednesday, 23 September 2020 at 09:46:54 UTC, Stefan Koch 
 wrote:

 That should be surprising since type functions are just 
 regular functions that happen to accept types as arguments ;)
I meant that should _not_ be surprising.
Great work. Has Andrei given any advice on how to proceed regarding dip, reviews and acceptance? I presume this will get merged behind a -preview if it gets merged.
Sep 23 2020
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/23/2020 4:49 AM, Per Nordlöw wrote:
 Great work. Has Andrei given any advice on how to proceed regarding dip,
reviews 
 and acceptance? I presume this will get merged behind a -preview if it gets
merged.
The other way to do it is just have the compiler recognize recursive templates and implement them directly. This would provide the speedup, while not adding new features or requiring any user code changes - it'll just run faster.
Sep 23 2020
next sibling parent reply Bruce Carneal <bcarneal gmail.com> writes:
On Wednesday, 23 September 2020 at 22:36:55 UTC, Walter Bright 
wrote:
 On 9/23/2020 4:49 AM, Per Nordlöw wrote:
 Great work. Has Andrei given any advice on how to proceed 
 regarding dip, reviews and acceptance? I presume this will get 
 merged behind a -preview if it gets merged.
The other way to do it is just have the compiler recognize recursive templates and implement them directly. This would provide the speedup, while not adding new features or requiring any user code changes - it'll just run faster.
I see a few reasons to prefer type functions, wherever applicable (they can't do everything):

1) type functions admit a simpler/bounded compiler implementation
2) type functions admit simpler meta programs and the related
3) type functions should be easier to debug, eager rather than lazy/latent compile time errors for one thing
4) type functions exhibit better locality than templates
5) to achieve type function like simplicity/debuggability with templates you need to rely more heavily on "best practices"
Sep 23 2020
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/23/2020 4:59 PM, Bruce Carneal wrote:
 On Wednesday, 23 September 2020 at 22:36:55 UTC, Walter Bright wrote:
 The other way to do it is just have the compiler recognize recursive templates 
 and implement them directly. This would provide the speedup, while not adding 
 new features or requiring any user code changes - it'll just run faster.
I see a few reasons to prefer type functions, wherever applicable (they cant do everything): 1) type functions admit a simpler/bounded compiler implementation 2) type functions admit simpler meta programs and the related 3) type functions should be easier to debug, eager rather than lazy/latent compile time errors for one thing 4) type functions exhibit better locality than templates 5) to achieve type function like simplicity/debuggability with templates you need to rely more heavily on "best practices"
I don't really understand these points. Using the existing recursive template calls is well-known and well-understood functional style programming. It even aesthetically looks and acts like function calls, except with a bang (!). Making existing constructs work better is better than adding an ever-expanding list of new constructs.
Sep 23 2020
next sibling parent Bruce Carneal <bcarneal gmail.com> writes:
On Thursday, 24 September 2020 at 00:44:21 UTC, Walter Bright 
wrote:
 On 9/23/2020 4:59 PM, Bruce Carneal wrote:
 On Wednesday, 23 September 2020 at 22:36:55 UTC, Walter Bright 
 wrote:
 The other way to do it is just have the compiler recognize 
 recursive templates and implement them directly. This would 
 provide the speedup, while not adding new features or 
 requiring any user code changes - it'll just run faster.
I see a few reasons to prefer type functions, wherever applicable (they cant do everything): 1) type functions admit a simpler/bounded compiler implementation 2) type functions admit simpler meta programs and the related 3) type functions should be easier to debug, eager rather than lazy/latent compile time errors for one thing 4) type functions exhibit better locality than templates 5) to achieve type function like simplicity/debuggability with templates you need to rely more heavily on "best practices"
I don't really understand these points. Using the existing recursive template calls is well-known and well-understood functional style programming. It even aesthetically looks and acts like function calls, except with a bang (!). Making existing constructs work better is better than adding an ever-expanding list of new constructs.
For me, the overriding reason for adding functionality (type functions) is to achieve a net simplification. A compounding-over-time simplification that reduces the number of compiler bugs experienced and, much more importantly, the number of bugs experienced by meta programmers generally.

A few questions:

1) Is iteration sometimes simpler than recursion?
2) Is left leaning code (early exit) sometimes simpler than single exit (or nested static ifs)?
3) As experienced today, are CTFE functions sometimes easier to debug than templates? Are they almost always at least as easy?
4) Are any companies with large code bases hitting a wall with non-templates?
5) Are programmers more adept at debugging functions or pattern matchers?
6) Will new programmers correctly adopt templates more quickly than type functions?
7) Is the template subsystem (in the compiler) already one of the least stable, most fragile, bug-fraught subsystems within the compiler?
8) In their fully general form, do templates admit an implementation that is as bounded, as likely to be correct, as convergent, as type functions?
9) Are templates better at defining interfaces than CTFE functions?
10) Is it easier to bound the sphere of dependency with templates or CTFE functions?

Templates can do everything (Turing complete) so why don't we use them for everything? Because they're not the simplest approach to everything.
Sep 23 2020
prev sibling parent reply Jackel <jackel894_394 gmail.com> writes:
On Thursday, 24 September 2020 at 00:44:21 UTC, Walter Bright 
wrote:
 On 9/23/2020 4:59 PM, Bruce Carneal wrote:
 On Wednesday, 23 September 2020 at 22:36:55 UTC, Walter Bright 
 wrote:
 The other way to do it is just have the compiler recognize 
 recursive templates and implement them directly. This would 
 provide the speedup, while not adding new features or 
 requiring any user code changes - it'll just run faster.
I see a few reasons to prefer type functions, wherever applicable (they cant do everything): 1) type functions admit a simpler/bounded compiler implementation 2) type functions admit simpler meta programs and the related 3) type functions should be easier to debug, eager rather than lazy/latent compile time errors for one thing 4) type functions exhibit better locality than templates 5) to achieve type function like simplicity/debuggability with templates you need to rely more heavily on "best practices"
I don't really understand these points. Using the existing recursive template calls is well-known and well-understood functional style programming. It even aesthetically looks and acts like function calls, except with a bang (!). Making existing constructs work better is better than adding an ever-expanding list of new constructs.
It's basically what D is to C++ template metaprogramming.

Compare the staticMap implementation with a type function implementation and it's pretty clear which one is more readable and easier to maintain. The current D implementation will also create a bunch of template bloat with numerous instantiations that aren't actually required.

StaticMap, and many other templates like it, also need a workaround to reduce the number of instances, otherwise they fail. Which isn't as intuitive and not something someone will really know to do.

Behold:

template staticMap(alias F, T...)
{
    static if (T.length == 0)
    {
        alias staticMap = AliasSeq!();
    }
    else static if (T.length == 1)
    {
        alias staticMap = AliasSeq!(F!(T[0]));
    }
    /* Cases 2 to 8 improve compile performance by reducing
     * the number of recursive instantiations of staticMap
     */
    else static if (T.length == 2)
    {
        alias staticMap = AliasSeq!(F!(T[0]), F!(T[1]));
    }
    else static if (T.length == 3)
    {
        alias staticMap = AliasSeq!(F!(T[0]), F!(T[1]), F!(T[2]));
    }
    else static if (T.length == 4)
    {
        alias staticMap = AliasSeq!(F!(T[0]), F!(T[1]), F!(T[2]), F!(T[3]));
    }
    else static if (T.length == 5)
    {
        alias staticMap = AliasSeq!(F!(T[0]), F!(T[1]), F!(T[2]), F!(T[3]), F!(T[4]));
    }
    else static if (T.length == 6)
    {
        alias staticMap = AliasSeq!(F!(T[0]), F!(T[1]), F!(T[2]), F!(T[3]), F!(T[4]), F!(T[5]));
    }
    else static if (T.length == 7)
    {
        alias staticMap = AliasSeq!(F!(T[0]), F!(T[1]), F!(T[2]), F!(T[3]), F!(T[4]), F!(T[5]), F!(T[6]));
    }
    else static if (T.length == 8)
    {
        alias staticMap = AliasSeq!(F!(T[0]), F!(T[1]), F!(T[2]), F!(T[3]), F!(T[4]), F!(T[5]), F!(T[6]), F!(T[7]));
    }
    else
    {
        /* While:
         *     alias staticMap = AliasSeq!(F!T[0], staticMap!(F, T[1 .. $]));
         * does fewer template instantiations, the compiler implements
         * recursive template instantiations with recursion, and long
         * sequences overflow the compiler's stack.
         * The divide-and-conquer approach uses log_2(n) stack frames.
         */
        alias staticMap = AliasSeq!(
            staticMap!(F, T[ 0  .. $/2]),
            staticMap!(F, T[$/2 ..  $ ]));
    }
}
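
For contrast, a type-function version in the spirit of Stefan's static_map_tf above might read roughly as follows. This is a hypothetical sketch against the experimental talias branches (alias[] as a return type and setting its length are exactly the kind of thing Stefan notes is still shaky), not working code:

alias[] staticMap(alias F)(alias[] types ...)
{
    alias[] result;
    result.length = types.length;   // experimental; see Stefan's note about alias[] and druntime
    foreach (i, t; types)
    {
        result[i] = F(t);           // F here would itself be a type function
    }
    return result;
}

No recursion, no instantiation-count workarounds: the divide-and-conquer cases above simply have no counterpart.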
Sep 23 2020
parent reply Paul Backus <snarwin gmail.com> writes:
On Thursday, 24 September 2020 at 02:09:31 UTC, Jackel wrote:
 It's basically what D is to C++ template metaprogramming.

 Compare the staticMap implementation with a type function 
 implementation and it's pretty clear which one is more readable 
 and easier to maintain. The current D implementation will also 
 create a bunch of template bloat with numerous instantiations 
 that aren't actually required.

 StaticMap, and many other templates like it also need a 
 workaround to reduce the number of instances, otherwise they 
 fail. Which isn't as intuitive and not something someone will 
 really know to do.
On the other hand, if you can fix recursive template bloat somehow (tail-call elimination?), you get good performance *and* nice code. It's the best of both worlds.
Sep 23 2020
next sibling parent reply Stefan Koch <uplink.coder googlemail.com> writes:
On Thursday, 24 September 2020 at 02:42:00 UTC, Paul Backus wrote:
 On the other hand, if you can fix recursive template bloat 
 somehow (tail-call elimination?), you get good performance 
 *and* nice code. It's the best of both worlds.
good performance yes. But where is the nice code, when you are confined to recursive templates?
Sep 23 2020
next sibling parent Paul Backus <snarwin gmail.com> writes:
On Thursday, 24 September 2020 at 03:04:46 UTC, Stefan Koch wrote:
 On Thursday, 24 September 2020 at 02:42:00 UTC, Paul Backus 
 wrote:
 On the other hand, if you can fix recursive template bloat 
 somehow (tail-call elimination?), you get good performance 
 *and* nice code. It's the best of both worlds.
good performance yes. But where is the nice code, when you are confined to recursive templates?
I suppose whether you think recursion is pleasant or unpleasant is a matter of taste. Personally, I rather like it. :)
Sep 23 2020
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Sep 24, 2020 at 03:04:46AM +0000, Stefan Koch via Digitalmars-d wrote:
 On Thursday, 24 September 2020 at 02:42:00 UTC, Paul Backus wrote:
 
 On the other hand, if you can fix recursive template bloat somehow
 (tail-call elimination?), you get good performance *and* nice code.
 It's the best of both worlds.
good performance yes. But where is the nice code, when you are confined to recursive templates?
What we need is imperative syntactic sugar that lowers to recursive templates under the hood. (After all, in theory, every Turing-complete language is just syntactic sugar over lambda calculus. :-P)

That, plus implement template optimization schemes to lower the cost of recursive templates. The template analogue of tail-call optimization is a good first step, for example. Another good direction to investigate is instantiation elimination: i.e., defer the actual instantiation of a template (with its associated costs of copying the AST and substituting arguments, generating symbols, etc.) until it's determined to be necessary.


T

-- 
Many open minds should be closed for repairs. -- K5 user
Sep 23 2020
next sibling parent Bruce Carneal <bcarneal gmail.com> writes:
On Thursday, 24 September 2020 at 05:38:07 UTC, H. S. Teoh wrote:
 On Thu, Sep 24, 2020 at 03:04:46AM +0000, Stefan Koch via 
 Digitalmars-d wrote:
 On Thursday, 24 September 2020 at 02:42:00 UTC, Paul Backus 
 wrote:
 
 On the other hand, if you can fix recursive template bloat 
 somehow (tail-call elimination?), you get good performance 
 *and* nice code. It's the best of both worlds.
good performance yes. But where is the nice code, when you are confined to recursive templates?
What we need is imperative syntactic sugar that lowers to recursive templates under the hood. (After all, in theory, every Turing-complete language is just syntactic sugar over lambda calculus. :-P) That, plus implement template optimization schemes to lower the cost of recursive templates. The template analogue of tail-call optimization is a good first step, for example. Another good direction to investigate is instantiation elimination: i.e., defer the actual instantiation of a template (with its associated costs of copying the AST and substituting arguments, generating symbols, etc.) until it's determined to be necessary.
How do we get past the, rather severe, constraints Stefan outlined wrt tail recursion? Also, why "lower" to recursion when iteration is directly available? Are you proposing an improvement for templates that can not be represented in type functions?
 T
Sep 23 2020
prev sibling parent reply Stefan Koch <uplink.coder googlemail.com> writes:
On Thursday, 24 September 2020 at 05:38:07 UTC, H. S. Teoh wrote:

 What we need is imperative syntactic sugar that lowers to 
 recursive templates under the hood.  (After all, in theory, 
 every Turing-complete language is just syntactic sugar over 
 lambda calculus. :-P)

 That, plus implement template optimization schemes to lower the 
 cost of recursive templates.  The template analogue of 
 tail-call optimization is a good first step, for example.  
 Another good direction to investigate is instantiation 
 elimination: i.e., defer the actual instantiation of a template 
 (with its associated costs of copying the AST and substituting 
 arguments, generating symbols, etc.) until it's determined to 
 be necessary.


 T
I am not sure if what you are saying is actually serious or not ...

You're suggesting to provide an iterative syntax for the programmer, rewrite that internally into recursive templates, and then try to recover the iterative form to get the speed.

Did I paraphrase correctly?
Sep 24 2020
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Sep 24, 2020 at 03:08:05PM +0000, Stefan Koch via Digitalmars-d wrote:
 On Thursday, 24 September 2020 at 05:38:07 UTC, H. S. Teoh wrote:
 
 What we need is imperative syntactic sugar that lowers to recursive
 templates under the hood.  (After all, in theory, every
 Turing-complete language is just syntactic sugar over lambda
 calculus. :-P)
 
 That, plus implement template optimization schemes to lower the cost
 of recursive templates.  The template analogue of tail-call
 optimization is a good first step, for example.  Another good
 direction to investigate is instantiation elimination: i.e., defer
 the actual instantiation of a template (with its associated costs of
 copying the AST and substituting arguments, generating symbols,
 etc.) until it's determined to be necessary.
[...]
 I am not sure if what you are saying is actually serious or not ...
 
 You're suggesting to provide in iterative syntax for the programmer.
 Rewriting that internally into recursive templates.
 And then try to recover the iterative form to get the speed.
 
 Did I paraphrase correctly?
OK, it sounds ridiculous when you put it that way. :-D

But what I had in mind was what Walter said about leveraging the existing language instead of adding new features. My thought was to introduce new syntax that maps to the current template implementation (recursion, etc.): new syntax won't significantly add to the weight of the language since you're not defining new semantics. This solves the writability / readability problem.

Then the next step is to improve the implementation, so that it benefits both existing template code and, by extension, the new syntax. This would solve the performance issue, or at least address it.

Or maybe a more sensible approach is to implement a template optimizer with the current language, that optimizes certain patterns into iterative code, then at a later point introduce primitives that let you access this iterative code directly.


T

-- 
A linguistics professor was lecturing to his class one day. "In English," he said, "A double negative forms a positive. In some languages, though, such as Russian, a double negative is still a negative. However, there is no language wherein a double positive can form a negative." A voice from the back of the room piped up, "Yeah, yeah."
Sep 24 2020
parent Stefan Koch <uplink.coder googlemail.com> writes:
On Thursday, 24 September 2020 at 15:34:06 UTC, H. S. Teoh wrote:

 what I had in mind was what Walter said about leveraging the 
 existing language instead of adding new features.  My thought 
 was to introduce new syntax that maps to the current template 
 implementation (recursion, etc.):
That means changing the language.
 new syntax won't significantly add to the weight of the 
 language since you're not defining new semantics.  This solves 
 the writability / readability problem.
It's debatable whether that won't add to the weight of the language.
  Then the next step is to improve the implementation, so that 
 it benefits both existing template code and, by extension, the 
 new syntax. This would solve the performance issue, or at least 
 address it.
Sure, if there is any theoretically sound way to extract a monomorphic iterative form out of a polymorphic recursion in O(1). And when you have found it you _just_ have to implement it, correctly.
 Or maybe a more sensible approach is to implement a template 
 optimizer with the current language, that optimizes certain 
 patterns into iterative code, then at a later point introduce 
 primitives that let you access this iterative code directly.
So you are saying: first introduce type functions to be used under the hood, and later on make them available?
Sep 24 2020
prev sibling next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 9/23/20 10:42 PM, Paul Backus wrote:
 On Thursday, 24 September 2020 at 02:09:31 UTC, Jackel wrote:
 It's basically what D is to C++ template metaprogramming.

 Compare the staticMap implementation with a type function 
 implementation and it's pretty clear which one is more readable and 
 easier to maintain. The current D implementation will also create a 
 bunch of template bloat with numerous instantiations that aren't 
 actually required.

 StaticMap, and many other templates like it also need a workaround to 
 reduce the number of instances, otherwise they fail. Which isn't as 
 intuitive and not something someone will really know to do.
On the other hand, if you can fix recursive template bloat somehow (tail-call elimination?), you get good performance *and* nice code. It's the best of both worlds.
The recursive version is not nice code. It's hard to understand and difficult to follow. Its main advantage is that because it's essentially a functional language, and all templates are cached, in certain cases you can see a performance gain because the compiler can avoid actually redoing what it did. In practice, for things like staticMap, this doesn't ever pan out.

I don't see how you solve the tail recursion thing anyway -- each template instance cannot rely on the work that has already been done, because of static if.

The static map type function is a loop over an array. Easy to understand, easy to write. It's actually boring code. Static map shouldn't be interesting code at all.

-Steve
Sep 23 2020
next sibling parent Bruce Carneal <bcarneal gmail.com> writes:
On Thursday, 24 September 2020 at 03:17:44 UTC, Steven 
Schveighoffer wrote:
 On 9/23/20 10:42 PM, Paul Backus wrote:
 On Thursday, 24 September 2020 at 02:09:31 UTC, Jackel wrote:
 It's basically what D is to C++ template metaprogramming.

 Compare the staticMap implementation with a type function 
 implementation and it's pretty clear which one is more 
 readable and easier to maintain. The current D implementation 
 will also create a bunch of template bloat with numerous 
 instantiations that aren't actually required.

 StaticMap, and many other templates like it also need a 
 workaround to reduce the number of instances, otherwise they 
 fail. Which isn't as intuitive and not something someone will 
 really know to do.
On the other hand, if you can fix recursive template bloat somehow (tail-call elimination?), you get good performance *and* nice code. It's the best of both worlds.
The recursive version is not nice code. It's hard to understand and difficult to follow. It's main advantage is that because it's essentially a functional language, and all templates are cached, in certain cases, you can see a performance gain because the compiler can avoid actually redoing what it did. In practice, for things like staticMap, this doesn't ever pan out. I don't see how you solve the tail recursion thing anyway -- each template instance cannot rely on the work that has already been done, because of static if.
Yes. Didn't want to speak to these points first, thanks for speaking up Steve, but if I'm channeling Stefan correctly template memoization/recursion is pretty close to useless in practice. Maybe there's some magic breakthrough possible but from what I've been able to pick up, it's not looking good, even theoretically. Instead we're looking at a never ending series of hacks that almost work, sometimes, and break opaquely down the road.
 The static map type function is a loop over an array. Easy to 
 understand, easy to write. It's actually boring code. Static 
 map shouldn't be interesting code at all.
Boring is great.
Sep 23 2020
prev sibling parent reply Paul Backus <snarwin gmail.com> writes:
On Thursday, 24 September 2020 at 03:17:44 UTC, Steven 
Schveighoffer wrote:
 The recursive version is not nice code. It's hard to understand 
 and difficult to follow. It's main advantage is that because 
 it's essentially a functional language, and all templates are 
 cached, in certain cases, you can see a performance gain 
 because the compiler can avoid actually redoing what it did. In 
 practice, for things like staticMap, this doesn't ever pan out.

 I don't see how you solve the tail recursion thing anyway -- 
 each template instance cannot rely on the work that has already 
 been done, because of static if.

 The static map type function is a loop over an array. Easy to 
 understand, easy to write. It's actually boring code. Static 
 map shouldn't be interesting code at all.

 -Steve
I feel the same way about the naive recursive version:

template staticMap(alias F, Args...) {
    static if (Args.length == 0)
        alias staticMap = AliasSeq!();
    else
        alias staticMap = AliasSeq!(F!(Args[0]), staticMap!(F, Args[1 .. $]));
}

One base case, one recursive case, one line for each. The code practically writes itself. How could it be any clearer or more obvious?

Making it tail-recursive takes away some of the elegance, but doesn't make it any more "interesting":

template staticMap(alias F, Args...) {
    template loop(size_t i, Args...) {
        static if (i == Args.length)
            alias loop = Args;
        else
            alias loop = loop!(i + 1, Args[0 .. i], F!(Args[i]), Args[i+1 .. $]);
    }
    alias staticMap = loop!(0, Args);
}

If you have spent any time at all writing code in a functional language with tail-call elimination, this pattern will be immediately familiar to you. There's nothing about it that's hard to understand, or difficult to follow. It's completely textbook.

Of course, for someone who lacks that background, it might very well look bewildering--in the same way that, say, the Visitor pattern might look bewildering to someone unfamiliar with the idioms of OOP. Does that mean it's a bad pattern? Or does it just mean it's a pattern they haven't learned yet?
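
(For what it's worth, a quick usage check of the naive version above -- my own example, assuming std.meta.AliasSeq is imported and using a made-up alias template as the mapped function:)

import std.meta : AliasSeq;

alias Pointify(T) = T*;   // hypothetical example template to map over the arguments

alias Ptrs = staticMap!(Pointify, int, char);
static assert(is(Ptrs[0] == int*) && is(Ptrs[1] == char*));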
Sep 23 2020
next sibling parent reply Stefan Koch <uplink.coder googlemail.com> writes:
On Thursday, 24 September 2020 at 03:51:55 UTC, Paul Backus wrote:
 template staticMap!(alias F, Args...) {
     static if (Args.length == 0)
         alias staticMap = AliasSeq!();
     else
         alias staticMap = AliasSeq!(F!(Args[0]), staticMap!(F, 
 Args[1 .. $]));
 }
This code only works because tuples auto expand! Without that piece of knowledge the template can't be understood. And even then I find it a stretch to say this immediately looks like a loop over a parameter tuple.
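
For readers who haven't run into it, the auto-expansion being referred to is this (a minimal illustration using std.meta.AliasSeq):

import std.meta : AliasSeq;

// The nested sequence is flattened into the outer one automatically;
// this is what lets AliasSeq!(F!(Args[0]), staticMap!(F, Args[1 .. $]))
// build a flat result instead of a nested one.
alias Seq = AliasSeq!(int, AliasSeq!(char, bool));
static assert(Seq.length == 3);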
Sep 23 2020
parent reply Paul Backus <snarwin gmail.com> writes:
On Thursday, 24 September 2020 at 04:07:40 UTC, Stefan Koch wrote:
 On Thursday, 24 September 2020 at 03:51:55 UTC, Paul Backus 
 wrote:
 template staticMap!(alias F, Args...) {
     static if (Args.length == 0)
         alias staticMap = AliasSeq!();
     else
         alias staticMap = AliasSeq!(F!(Args[0]), staticMap!(F, 
 Args[1 .. $]));
 }
This code only works because tuples auto expand! Without that piece of knowledge the template can't be understood. And even then I find it a stretch to say this immediately looks like a loop over a parameter tuple.
I agree that tuple auto-expansion is weird, but what are we supposed to do about it, just never use tuples? It's not like the type function version gets rid of them either--the alias[] only exists inside the function, so you still have to deal with tuples if you want to use the result. As far as "not looking like a loop" goes--of course it doesn't! It's not a loop. It's a recursive traversal of a list, same as you'd write in any functional language.
Sep 23 2020
parent Stefan Koch <uplink.coder googlemail.com> writes:
On Thursday, 24 September 2020 at 04:39:43 UTC, Paul Backus wrote:
 I agree that tuple auto-expansion is weird, but what are we 
 supposed to do about it, just never use tuples? It's not like 
 the type function version gets rid of them either--the alias[] 
 only exists inside the function, so you still have to deal with 
 tuples if you want to use the result.
nope. staticMap_tf!sizeOf returns a regular size_t[]. not a tuple.
Sep 23 2020
prev sibling next sibling parent Bruce Carneal <bcarneal gmail.com> writes:
On Thursday, 24 September 2020 at 03:51:55 UTC, Paul Backus wrote:
 On Thursday, 24 September 2020 at 03:17:44 UTC, Steven 
 Schveighoffer wrote:
 [...]
I feel the same way about the naive recursive version:

template staticMap(alias F, Args...) {
    static if (Args.length == 0)
        alias staticMap = AliasSeq!();
    else
        alias staticMap = AliasSeq!(F!(Args[0]), staticMap!(F, Args[1 .. $]));
}

One base case, one recursive case, one line for each. The code practically writes itself. How could it be any clearer or more obvious?

Making it tail-recursive takes away some of the elegance, but doesn't make it any more "interesting":

template staticMap(alias F, Args...) {
    template loop(size_t i, Args...) {
        static if (i == Args.length)
            alias loop = Args;
        else
            alias loop = loop!(i + 1, Args[0 .. i], F!(Args[i]), Args[i+1 .. $]);
    }
    alias staticMap = loop!(0, Args);
}

If you have spent any time at all writing code in a functional language with tail-call elimination, this pattern will be immediately familiar to you. There's nothing about it that's hard to understand, or difficult to follow. It's completely textbook.

Of course, for someone who lacks that background, it might very well look bewildering--in the same way that, say, the Visitor pattern might look bewildering to someone unfamiliar with the idioms of OOP. Does that mean it's a bad pattern? Or does it just mean it's a pattern they haven't learned yet?
Recursion is great if you're actually partitioning a space, if you need backtracking, if ... Experienced programmers will reach for it in those situations. Recursion is somewhat less awe inspiring if you're using it as a loop. Regardless, unless I'm missing something, the recursive form you put forward doesn't meet Stefan's optimizability criteria. Recursion aficionados, along with the "use a loop if it fits" crowd, should both see faster code when type functions apply.
Sep 23 2020
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 24.09.20 05:51, Paul Backus wrote:
 
 I feel the same way about the naive recursive version:
 
 template staticMap(alias F, Args...) {
      static if (Args.length == 0)
          alias staticMap = AliasSeq!();
      else
          alias staticMap = AliasSeq!(F!(Args[0]), staticMap!(F, Args[1 
 .. $]));
 }
 
 One base case, one recursive case, one line for each. The code 
 practically writes itself. How could it be any clearer or more obvious?
 
 Making it tail-recursive takes away some of the elegance, but doesn't 
 make it any more "interesting":
 
 template staticMap(alias F, Args...) {
      template loop(size_t i, Args...) {
          static if (i == Args.length)
              alias loop = Args;
          else
              alias loop = loop!(i + 1, Args[0 .. i], F!(Args[i]), 
 Args[i+1 .. $]);
      }
      alias staticMap = loop!(0, Args);
 }
 
 If you have spent any time at all writing code in a functional language 
 with tail-call elimination, this pattern will be immediately familiar to 
 you. There's nothing about it that's hard to understand, or difficult to 
 follow. It's completely textbook.
 ...
True, only a novice would use explicit recursion. map=(`foldr`[]).((:).)
 Of course, for someone who lacks that background, it might very well 
 look bewildering--in the same way that, say, the Visitor pattern might 
 look bewildering to someone unfamiliar with the idioms of OOP. Does that 
 mean it's a bad pattern? Or does it just mean it's a pattern they 
 haven't learned yet?
It means your programming language lacks features to write the code in a better way. Note that D templates form a very ugly programming language with a semantics that is hard to implement efficiently.
Sep 24 2020
parent Stefan Koch <uplink.coder googlemail.com> writes:
On Thursday, 24 September 2020 at 10:08:30 UTC, Timon Gehr wrote:
 
 It means your programming language lacks features to write the 
 code in a better way. Note that D templates form a very ugly 
 programming language with a semantics that is hard to implement 
 efficiently.
Hello Timon,

I didn't notice your reply when you gave it. Thanks for expressing the concern so clearly.

Last time we talked about type functions on here, you said they'd have to be able to replace templates in all circumstances (i.e. be able to actually create new types). Do you think so still?

Greetings,

Stefan
Sep 25 2020
prev sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 9/23/20 11:51 PM, Paul Backus wrote:
 On Thursday, 24 September 2020 at 03:17:44 UTC, Steven Schveighoffer wrote:
 The recursive version is not nice code. It's hard to understand and 
 difficult to follow. It's main advantage is that because it's 
 essentially a functional language, and all templates are cached, in 
 certain cases, you can see a performance gain because the compiler can 
 avoid actually redoing what it did. In practice, for things like 
 staticMap, this doesn't ever pan out.

 I don't see how you solve the tail recursion thing anyway -- each 
 template instance cannot rely on the work that has already been done, 
 because of static if.

 The static map type function is a loop over an array. Easy to 
 understand, easy to write. It's actually boring code. Static map 
 shouldn't be interesting code at all.
I feel the same way about the naive recursive version:

template staticMap(alias F, Args...) {
    static if (Args.length == 0)
        alias staticMap = AliasSeq!();
    else
        alias staticMap = AliasSeq!(F!(Args[0]), staticMap!(F, Args[1 .. $]));
}

One base case, one recursive case, one line for each. The code practically writes itself. How could it be any clearer or more obvious?
It fails to explicitly say what the resulting size of the AliasSeq will be, unlike the type function code (result.length = types.length). The transformation call to F is buried in the logic to construct the result. Compare to: result[i] = F(t);

One has to mentally work through how the recursion relationship works to ensure it terminates properly; I have to read every line to see how the thing plays out. Vs. reading each line of the typefunction code and deciding "yes, this does what it's supposed to".

Yes, the code writes itself. Reading and understanding it isn't as straightforward.

BTW, I don't see how tail-recursion can be done still. Even with your tail-recursive version, you still have to rebuild each instance of the template, because you don't know if it's going to generate different code via static if. Whereas runtime tail recursion builds one version of the function, and just jumps back into the same function with modified input.
 Of course, for someone who lacks that background, it might very well 
 look bewildering--in the same way that, say, the Visitor pattern might 
 look bewildering to someone unfamiliar with the idioms of OOP. Does that 
 mean it's a bad pattern? Or does it just mean it's a pattern they 
 haven't learned yet?
Not bewildering, but requires more thought to understand. The array version requires little thought, almost every line is independent, and does not require the context of the whole construct to verify what it does or that it is correct.

It also helps that the "simple naive" version of the typefunction code is the best version. You don't have to write it in a special way to make the compiler perform well. One of the reasons D's code generation capabilities are so approachable vs. C++ is because it does not need to be written in functional style. You can write code that does it exactly how you would do it if you were doing it by hand.

-Steve
Sep 24 2020
parent Paul Backus <snarwin gmail.com> writes:
On Thursday, 24 September 2020 at 12:39:57 UTC, Steven 
Schveighoffer wrote:
 BTW, I don't see how tail-recursion can be done still. Even 
 with your tail-recursive version, you still have to rebuild 
 each instance of the template, because you don't know if it's 
 going to generate different code via static if. Whereas runtime 
 tail recursion builds one version of the function, and just 
 jumps back into the same function with modified input.
You have to build N versions, but (in principle) you don't have to keep them all around in memory--only the last one is necessary. So you still save on resources compared to naive template recursion, although not as much as you would with type functions.
 It also helps that the "simple naive" version of the 
 typefunction code is the best version. You don't have to write 
 it in a special way to make the compiler perform well.
Definitely a valid point. Regardless of one's aesthetic preferences, recursive templates are not a "pit of success."
Sep 24 2020
prev sibling parent Bruce Carneal <bcarneal gmail.com> writes:
On Thursday, 24 September 2020 at 02:42:00 UTC, Paul Backus wrote:
 On Thursday, 24 September 2020 at 02:09:31 UTC, Jackel wrote:
 It's basically what D is to C++ template metaprogramming.

 Compare the staticMap implementation with a type function 
 implementation and it's pretty clear which one is more 
 readable and easier to maintain. The current D implementation 
 will also create a bunch of template bloat with numerous 
 instantiations that aren't actually required.

 StaticMap, and many other templates like it also need a 
 workaround to reduce the number of instances, otherwise they 
 fail. Which isn't as intuitive and not something someone will 
 really know to do.
On the other hand, if you can fix recursive template bloat somehow (tail-call elimination?), you get good performance *and* nice code. It's the best of both worlds.
It would be a very welcome performance improvement for current template programming, yes, but for me type functions are about a lot more than the compile time performance improvement they would bring. Making a big chunk of meta programming easier to understand, easier to debug, easier to compose, is the big payoff. "It's just like your other programming".

And yes, it would be nice if we could get the performance associated with iterative implementations guaranteed rather than hope the compiler can find actual recursion within the unrestricted polymorphic context of templates.
Sep 23 2020
prev sibling parent Stefan Koch <uplink.coder googlemail.com> writes:
On Wednesday, 23 September 2020 at 22:36:55 UTC, Walter Bright 
wrote:
 On 9/23/2020 4:49 AM, Per Nordlöw wrote:
 Great work. Has Andrei given any advice on how to proceed 
 regarding dip, reviews and acceptance? I presume this will get 
 merged behind a -preview if it gets merged.
The other way to do it is just have the compiler recognize recursive templates and implement them directly. This would provide the speedup, while not adding new features or requiring any user code changes - it'll just run faster.
The problem here is: as you might be aware, I have been looking into this problem for a while. The issue is that templates are polymorphic, which means that static analysis of a template is not possible in the general case.

The case which I have identified where analysis of a template is possible is this one:

- the template does not use a string mixin dependent on a parameter or the outer template context,
- the template does not use static ifs dependent on parameters or the outer template context,
- the template does not use static foreach or foreach dependent on parameters or the outer template context.

That means no static if, no foreach, and no mixin. If you can write useful metaprogramming templates given those constraints then it's possible to optimize them.
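
To make those constraints concrete (my own toy examples, not taken from the compiler work itself):

// Analyzable in principle: the template's shape does not depend on its
// parameters via static if / static foreach / mixin.
template Pointify(T)
{
    alias Pointify = T*;
}

// Not analyzable in the general case: which branch even exists depends
// on the parameters, so the compiler cannot know the template's shape
// without instantiating it.
template Larger(T, U)
{
    static if (T.sizeof >= U.sizeof)
        alias Larger = T;
    else
        alias Larger = U;
}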
Sep 23 2020
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On Wednesday, 23 September 2020 at 09:46:54 UTC, Stefan Koch 
wrote:

 static assert(static_map_tf!sizeOf(Int, Ushort) == [4, 2]);
Is it possible to use a lambda here? -- /Jacob Carlborg
Sep 23 2020
parent reply Stefan Koch <uplink.coder googlemail.com> writes:
On Wednesday, 23 September 2020 at 12:22:49 UTC, Jacob Carlborg 
wrote:
 On Wednesday, 23 September 2020 at 09:46:54 UTC, Stefan Koch 
 wrote:

 static assert(static_map_tf!sizeOf(Int, Ushort) == [4, 2]);
Is it possible to use a lambda here? -- /Jacob Carlborg
It should be in theory but I advise against it. Mixing lambdas and template alias parameters leads to instantiating different template instances although semantically they're the same.
Sep 23 2020
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 9/23/20 8:40 AM, Stefan Koch wrote:
 On Wednesday, 23 September 2020 at 12:22:49 UTC, Jacob Carlborg wrote:
 On Wednesday, 23 September 2020 at 09:46:54 UTC, Stefan Koch wrote:

 static assert(static_map_tf!sizeOf(Int, Ushort) == [4, 2]);
Is it possible to use a lambda here?
It should be in theory but I advise against it. Mixing lambdas and template alias parameters leads to instantiating different template instances although semantically they're the same.
what about:

static_map_tf!((alias a) => a.sizeof)(...)

This shouldn't be a template.

-Steve
Sep 23 2020
parent reply Stefan Koch <uplink.coder googlemail.com> writes:
On Wednesday, 23 September 2020 at 13:19:34 UTC, Steven 
Schveighoffer wrote:
 On 9/23/20 8:40 AM, Stefan Koch wrote:
 On Wednesday, 23 September 2020 at 12:22:49 UTC, Jacob 
 Carlborg wrote:
 On Wednesday, 23 September 2020 at 09:46:54 UTC, Stefan Koch 
 wrote:

 static assert(static_map_tf!sizeOf(Int, Ushort) == [4, 2]);
Is it possible to use a lambda here?
It should be in theory but I advise against it. Mixing lambdas and template alias parameters leads to instantiating different template instances although semantically they're the same.
what about: static_map_tf!((alias a) => a.sizeof)(...) This shouldn't be a template. -Steve
With:

pragma(msg, (static_map_tf!((alias a) => a.sizeof)(Int, Ushort) == [4, 2]));
pragma(msg, (static_map_tf!((alias a) => a.sizeof)(Int, Ushort) == [4, 2]));

-vtemplates reports:

static_map_tf.d(12): vtemplate: 2 (2 unique) instantiation(s) of template `static_map_tf(alias F)(alias_array types...)` found

With:

pragma(msg, (static_map_tf!((alias a) => a.sizeof)(Int, Ushort) == [4, 2]));
pragma(msg, (static_map_tf!((alias a) => a.sizeof)(Int, Ushort) == [4, 2]));
pragma(msg, (static_map_tf!((alias a) => a.sizeof)(Int, Ushort) == [4, 2]));

static_map_tf.d(12): vtemplate: 3 (3 unique) instantiation(s) of template `static_map_tf(alias F)(alias_array types...)` found
Sep 23 2020
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 9/23/20 9:38 AM, Stefan Koch wrote:
 On Wednesday, 23 September 2020 at 13:19:34 UTC, Steven Schveighoffer 
 wrote:
 On 9/23/20 8:40 AM, Stefan Koch wrote:
 On Wednesday, 23 September 2020 at 12:22:49 UTC, Jacob Carlborg wrote:
 On Wednesday, 23 September 2020 at 09:46:54 UTC, Stefan Koch wrote:

 static assert(static_map_tf!sizeOf(Int, Ushort) == [4, 2]);
Is it possible to use a lambda here?
It should be in theory but I advise against it. Mixing lambdas and template alias parameters leads to instantiating different template instances although semantically they're the same.
what about: static_map_tf!((alias a) => a.sizeof)(...) This shouldn't be a template.
 static_map_tf.d(12): vtemplate: 3 (3 unique) instantiation(s) of 
 template `static_map_tf(alias F)(alias_array types...)` found
Ah ok. You meant the static map template is going to be different because the lambdas are considered different aliases. That's standard behavior for the current compiler too. I thought you meant that type functions have an extra problem with templates using lambdas. It makes me wonder -- can the compiler determine identical lambdas and reduce template instantiations? This was one of the problems with moving from string lambdas to actual lambdas. -Steve
Sep 23 2020
next sibling parent reply Meta <jared771 gmail.com> writes:
On Wednesday, 23 September 2020 at 14:05:50 UTC, Steven 
Schveighoffer wrote:
 On 9/23/20 9:38 AM, Stefan Koch wrote:
 static_map_tf.d(12): vtemplate: 3 (3 unique) instantiation(s) 
 of template `static_map_tf(alias F)(alias_array types...)` 
 found
Ah ok. You meant the static map template is going to be different because the lambdas are considered different aliases. That's standard behavior for the current compiler too. I thought you meant that type functions have an extra problem with templates using lambdas. It makes me wonder -- can the compiler determine identical lambdas and reduce template instantiations? This was one of the problems with moving from string lambdas to actual lambdas. -Steve
There was some limited support for lambda comparison added in 2.079: https://dlang.org/changelog/2.079.0.html#lambdacomp
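
That support works through __traits(isSame); roughly this kind of comparison is what it enables (see the changelog for its exact limits -- it only handles fairly simple lambdas):

static assert(__traits(isSame, (a, b) => a + b, (c, d) => c + d)); // parameter names don't matter
static assert(!__traits(isSame, (a, b) => a + b, (a, b) => a - b)); // different bodies compare unequal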
Sep 23 2020
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 9/23/20 10:15 AM, Meta wrote:
 On Wednesday, 23 September 2020 at 14:05:50 UTC, Steven Schveighoffer 
 wrote:
 On 9/23/20 9:38 AM, Stefan Koch wrote:
 static_map_tf.d(12): vtemplate: 3 (3 unique) instantiation(s) of 
 template `static_map_tf(alias F)(alias_array types...)` found
Ah ok. You meant the static map template is going to be different because the lambdas are considered different aliases. That's standard behavior for the current compiler too. I thought you meant that type functions have an extra problem with templates using lambdas. It makes me wonder -- can the compiler determine identical lambdas and reduce template instantiations? This was one of the problems with moving from string lambdas to actual lambdas.
There was some limited support for lambda comparison added in 2.079: https://dlang.org/changelog/2.079.0.html#lambdacomp
OK, so this is possible in some circumstances. A good update to dmd would be to identify such "comparable" lambdas, and merge their template instantiations into one. -Steve
Sep 23 2020
parent Stefan Koch <uplink.coder googlemail.com> writes:
On Wednesday, 23 September 2020 at 14:32:46 UTC, Steven 
Schveighoffer wrote:

 OK, so this is possible in some circumstances. A good update to 
 dmd would be to identify such "comparable" lambdas, and merge 
 their template instantiations into one.

 -Steve
Only if you can keep the complexity out of the template system/general semantics. And I doubt that you could.
Sep 23 2020
prev sibling parent reply Stefan Koch <uplink.coder googlemail.com> writes:
On Wednesday, 23 September 2020 at 14:05:50 UTC, Steven 
Schveighoffer wrote:

 It makes me wonder -- can the compiler determine identical 
 lambdas and reduce template instantiations? This was one of the 
 problems with moving from string lambdas to actual lambdas.
In general no. Even for string lambdas that'd be wrong. We have context dependent tokens ...
Sep 23 2020
parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 9/23/20 10:17 AM, Stefan Koch wrote:
 On Wednesday, 23 September 2020 at 14:05:50 UTC, Steven Schveighoffer 
 wrote:
 
 It makes me wonder -- can the compiler determine identical lambdas and 
 reduce template instantiations? This was one of the problems with 
 moving from string lambdas to actual lambdas.
In general no. Even for string lambda's that'd be wrong. We have context dependent tokens ...
The lambda I wrote has no context dependent tokens. String lambdas were instantiated in one place, so there were no problems with context (as long as the string was
Sep 23 2020
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/23/20 5:46 AM, Stefan Koch wrote:
 struct DummyType {} // just a dummy to get the right inference
 
 auto getUDAs(alias T) { return __traits(getAttributes, T); } // needed 
 because that's the shortest thing that'll return alias[]
 
 alias alias_array = typeof(getUDAs(DummyType));
 
 auto static_map_tf(alias F)(alias_array types ...)
 {
      typeof(F(DummyType))[] result;
      result.length = types.length;
 
      foreach(i, t;types)
      {
          result[i] = F(t);
      }
 
      return result;
 }
 
 size_t sizeOf(alias t)
 {
      return t.sizeof;
 }
 
 alias Int = int;
 alias Ushort = ushort; // we need these aliases because the parser won't 
 allow us to call a function with a reserved type identifier.
 
 static assert(static_map_tf!sizeOf(Int, Ushort) == [4, 2]);
Was looking at this example thinking, if only we had an object available for each type. And then it struck me - we already do. It's typeid. For each type T, typeid(T) yields an object (of type class TypeInfo) that can be copied, compared etc. So the example could be written with typeid as such:

TypeInfo[] static_map_tf(alias F)(TypeInfo[] types...)
{
    typeof(F(types[0]))[] result;
    result.length = types.length;

    foreach(i, t; types)
    {
        result[i] = F(t);
    }

    return result;
}

size_t sizeOf(TypeInfo t)
{
    return mixin("(" ~ t.toString ~ ").sizeof");
}

static assert(static_map_tf!sizeOf(typeid(int), typeid(ushort)) == [4, 2]);

A few comments:

* Currently mixin closes the circle by taking back the typeid to the type it started from. It would be nice to have something better, e.g. t.Type would just be the type.

* In the current implementation values returned by typeid() cannot be read during compilation. So one question is if they could. Per https://github.com/dlang/druntime/pull/3174, that can be done largely at the library level.

* This could work with no change to the language definition. All that's needed to get lift is make TypeInfo values usable during compilation.
Sep 23 2020
next sibling parent Stefan Koch <uplink.coder googlemail.com> writes:
On Thursday, 24 September 2020 at 05:13:49 UTC, Andrei 
Alexandrescu wrote:
 On 9/23/20 5:46 AM, Stefan Koch wrote:
 struct DummyType {} // just a dummy to get the right inference
 
 auto getUDAs(alias T) { return __traits(getAttributes, T); } 
 // needed because that's the shortest thing that'll return 
 alias[]
 
 alias alias_array = typeof(getUDAs(DummyType));
 
 auto static_map_tf(alias F)(alias_array types ...)
 {
      typeof(F(DummyType))[] result;
      result.length = types.length;
 
      foreach(i, t;types)
      {
          result[i] = F(t);
      }
 
      return result;
 }
 
 size_t sizeOf(alias t)
 {
      return t.sizeof;
 }
 
 alias Int = int;
 alias Ushort = ushort; // we need these aliases because the 
 parser won't allow us to call a function with a reserved type 
 identifier.
 
 static assert(static_map_tf!sizeOf(Int, Ushort) == [4, 2]);
Was looking at this example thinking, if only we had an object available for each type. And then it struck me - we already do. It's typeid. For each type T, typeid(T) yields an object (of type class TypeInfo) that can be copied, compared etc.
Well, yeah, kind of. What the type function does when it creates the alias from the type is indeed similar to what a complete type info would do.

However, what you don't get is the nice syntax that comes with type functions. And you don't get the back-channel from type-info to type. Getting typeinfo to properly fit in here will likely mean internal compiler adjustments on the same scale as type functions.

There are also uses such as the type function for fullyQualifiedName which can't be done by converting to typeinfo objects.
Sep 23 2020
prev sibling next sibling parent Stefan Koch <uplink.coder googlemail.com> writes:
wait ....

On Thursday, 24 September 2020 at 05:13:49 UTC, Andrei 
Alexandrescu wrote:
 size_t sizeOf(TypeInfo t)
 {
     return mixin("(" ~ t.toString ~ ").sizeof");
 }
What the ....

The sizeOf function you wrote is POLYMORPHIC! It changes shape. This would work in a DYNAMIC language, but it DOES NOT work in a STATIC language.

For your approach to work, sizeOf needs to be a part of the typeinfo object.
Sep 23 2020
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Sep 24, 2020 at 01:13:49AM -0400, Andrei Alexandrescu via Digitalmars-d
wrote:
[...]
 Was looking at this example thinking, if only we had an object
 available for each type. And then it struck me - we already do. It's
 typeid. For each type T, typeid(T) yields an object (of type class
 TypeInfo) that can be copied, compared etc.
+1. I've mentioned this before in one of Stefan's threads. Basically, to promote types to first-class citizen status, all we need to do is to treat typeid specially at compile-time. At compile-time, typeid becomes something like a glorified alias to the type itself, such that it can be manipulated, passed around, interrogated, etc.. At runtime, typeid becomes the familiar RTTI object. [...]
 * This could work with no change to the language definition. All
 that's needed to get lift is make TypeInfo values usable during
 compilation.
Or treat typeid specially at compile-time, as a compile-time object not necessarily tied to TypeInfo (but "collapses" into TypeInfo at runtime).


T

-- 
There are 10 kinds of people in the world: those who can count in binary, and those who can't.
Sep 23 2020
next sibling parent Stefan Koch <uplink.coder googlemail.com> writes:
On Thursday, 24 September 2020 at 05:51:30 UTC, H. S. Teoh wrote:
I've mentioned this before in one of Stefan's threads.
 Basically, to promote types to first-class citizen status, all 
 we need to do is to treat typeid specially at compile-time.  At 
 compile-time, typeid becomes something like a glorified alias 
 to the type itself, such that it can be manipulated, passed 
 around, interrogated, etc..  At runtime, typeid becomes the 
 familiar RTTI object.
Whether you call it `alias` or `typeid` you need the same functionality. In fact, type functions work in that way. Just that type functions have some more functionality to take care of more commonly needed meta-programming exercises. I don't see where the difference is other than a syntactic one.
Sep 23 2020
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.com> writes:
On 9/24/20 1:51 AM, H. S. Teoh wrote:
 On Thu, Sep 24, 2020 at 01:13:49AM -0400, Andrei Alexandrescu via
Digitalmars-d wrote:
 [...]
 Was looking at this example thinking, if only we had an object
 available for each type. And then it struck me - we already do. It's
 typeid. For each type T, typeid(T) yields an object (of type class
 TypeInfo) that can be copied, compared etc.
+1. I've mentioned this before in one of Stefan's threads. Basically, to promote types to first-class citizen status, all we need to do is to treat typeid specially at compile-time. At compile-time, typeid becomes something like a glorified alias to the type itself, such that it can be manipulated, passed around, interrogated, etc.. At runtime, typeid becomes the familiar RTTI object.
Indeed you did! https://forum.dlang.org/post/mailman.5271.1597852264.31109.digitalmars-d puremagic.com
 [...]
 * This could work with no change to the language definition. All
 that's needed to get lift is make TypeInfo values usable during
 compilation.
Or treat typeid specially at compile-time, as a compile-time object not necessarily tied to TypeInfo (but "collapses" into TypeInfo at runtime).
Looks like we have something here!
Sep 24 2020
prev sibling next sibling parent reply Stefan Koch <uplink.coder googlemail.com> writes:
On Thursday, 24 September 2020 at 05:13:49 UTC, Andrei 
Alexandrescu wrote:
 * Currently mixin closes the circle by taking back the typeid 
 to the type it started from. It would be nice to have something 
 better, e.g. t.Type would just be the type.
This would make typeid polymorphic, in other words a template.
 * In the current implementation values returned by typeid() 
 cannot be read during compilation.
Wrong. Parts of typeid can be read by CTFE. I extended this functionality myself.
 So one question is if they could. Per 
 https://github.com/dlang/druntime/pull/3174, that can be done 
 largely at the library level.
Perhaps, but I don't see how to do that without introducing polymorphic types (meaning template types).
 * This could work with no change to the language definition. 
 All that's needed to get lift is make TypeInfo values usable 
 during compilation.
Wrong. The polymorphic behavior you would need to get back to the type from typeid would be a massive change.
Sep 23 2020
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.com> writes:
On 9/24/20 2:49 AM, Stefan Koch wrote:
 On Thursday, 24 September 2020 at 05:13:49 UTC, Andrei Alexandrescu wrote:
 * Currently mixin closes the circle by taking back the typeid to the 
 type it started from. It would be nice to have something better, e.g. 
 t.Type would just be the type.
This would make typeid polymorphic in other words a template.
 * In the current implementation values returned by typeid() cannot be 
 read during compilation.
Wrong. Parts of typeid can be read by CTFE. I extended this functionality myself.
Great. Can you please elaborate a little? What bits work and what bits don't? Thanks!
Sep 24 2020
parent reply Stefan Koch <uplink.coder googlemail.com> writes:
On Thursday, 24 September 2020 at 14:07:17 UTC, Andrei 
Alexandrescu wrote:
 On 9/24/20 2:49 AM, Stefan Koch wrote:
 On Thursday, 24 September 2020 at 05:13:49 UTC, Andrei 
 Alexandrescu wrote:
 * Currently mixin closes the circle by taking back the typeid 
 to the type it started from. It would be nice to have 
 something better, e.g. t.Type would just be the type.
This would make typeid polymorphic in other words a template.
 * In the current implementation values returned by typeid() 
 cannot be read during compilation.
Wrong. Parts of typeid can be read by CTFE. I extended this functionality myself.
Great. Can you please elaborate a little? What bits work and what bits don't? Thanks!
classinfo.name works, as does equality between typeid objects. To make everything work at CTFE you would have to duplicate the whole typeinfo generation code. At least naively, that's what you would have to do.
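A minimal sketch of the kind of use this means (whether these asserts pass depends on the compiler having the partial typeid-at-CTFE support described here; the class names are just placeholders):

```
class Foo {}
class Bar {}

static assert(Foo.classinfo.name.length > 0); // reading classinfo.name during CTFE
static assert(typeid(Foo) == typeid(Foo));    // equality between typeid objects
static assert(typeid(Foo) != typeid(Bar));
```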
Sep 24 2020
parent reply Stefan Koch <uplink.coder googlemail.com> writes:
On Thursday, 24 September 2020 at 14:25:37 UTC, Stefan Koch wrote:
 to make everything of typeid work at ctfe you would have to 
 duplicate the whole typeinfo generation code. At least naively 
 that's what you would have to do.
Let me show you the code that does only typeid.name

else if (ex.op == TOK.typeid_)
{
    if (v.ident == Identifier.idPool("name"))
    {
        if (auto t = isType(ex.isTypeidExp().obj))
        {
            auto sym = t.toDsymbol(null);
            if (auto ident = (sym ? sym.ident : null))
            {
                result = new StringExp(e.loc, ident.toString());
                result.expressionSemantic(null);
                return ;
            }
        }
    }
    return notImplementedYet();
}

You would need at least 5-10 lines for every member of typeinfo.
Sep 24 2020
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Thursday, 24 September 2020 at 14:36:45 UTC, Stefan Koch wrote:
 Let me show you the code that does only typeid.name
What if TypeInfo was templated instead of magically generated? So the compiler doesn't need to do all this stuff.
Sep 24 2020
parent reply Stefan Koch <uplink.coder googlemail.com> writes:
On Thursday, 24 September 2020 at 15:54:55 UTC, Adam D. Ruppe 
wrote:
 On Thursday, 24 September 2020 at 14:36:45 UTC, Stefan Koch 
 wrote:
 Let me show you the code that does only typeid.name
What if TypeInfo was templated instead of magically generated? So the compiler doesn't need to do all this stuff.
You still have to instantiate all those templates and do it correctly. Which means the compiler needs some way of exposing all the information via the templates system. Which essentially requires the same code.
Sep 24 2020
next sibling parent reply Stefan Koch <uplink.coder googlemail.com> writes:
On Thursday, 24 September 2020 at 15:57:03 UTC, Stefan Koch wrote:
 On Thursday, 24 September 2020 at 15:54:55 UTC, Adam D. Ruppe 
 wrote:
 On Thursday, 24 September 2020 at 14:36:45 UTC, Stefan Koch 
 wrote:
 Let me show you the code that does only typeid.name
What if TypeInfo was templated instead of magically generated? So the compiler doesn't need to do all this stuff.
You still have to instantiate all those templates and do it correctly. Which means the compiler needs some way of exposing all the information via the templates system. Which essentially requires the same code.
Oh, and that typeinfo has to be generated eagerly! Whereas the current way it works with CTFE is lazy, which means that as long as nothing else requires typeinfo, using it at CTFE does not require you to generate it. So code that only uses type info at CTFE does compile with -betterC.
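For reference, a minimal plain-D illustration of the -betterC point (not the talias branch; the file name in the build comment is just an assumption):

```
// Build: dmd -betterC ctfe_only.d   (assumed file name)
extern(C) void main()
{
    // Evaluated entirely at compile time; no TypeInfo object is needed,
    // so this is fine under -betterC.
    static assert(int.sizeof == 4 && ushort.sizeof == 2);

    // This, by contrast, would be rejected under -betterC because it needs
    // the eagerly generated runtime TypeInfo object:
    // auto ti = typeid(int);
}
```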
Sep 24 2020
next sibling parent reply Bruce Carneal <bcarneal gmail.com> writes:
On Thursday, 24 September 2020 at 15:59:51 UTC, Stefan Koch wrote:
 Oh and typeinfo has to be generated eagerly!
 Whereas the current way it works with ctfe is lazy.
 Which means as long as nothing else requires typeinfo, using it 
 at CTFE does not require you to generate it.
 So code that only uses type info at ctfe does compile with 
 -betterC.
So, to confirm:

1) type functions, as proposed, will work with -betterC?
2) and, related: type functions leave no footprint in .o files?
Sep 24 2020
parent Stefan Koch <uplink.coder googlemail.com> writes:
On Thursday, 24 September 2020 at 16:18:22 UTC, Bruce Carneal 
wrote:
 On Thursday, 24 September 2020 at 15:59:51 UTC, Stefan Koch 
 wrote:
 Oh and typeinfo has to be generated eagerly!
 Whereas the current way it works with ctfe is lazy.
 Which means as long as nothing else requires typeinfo, using 
 it at CTFE does not require you to generate it.
 So code that only uses type info at ctfe does compile with 
 -betterC.
So, to confirm: 1) type functions, as proposed, will work with -betterC? and, related 2) type functions leave no footprint in .o files?
1 and 2 confirmed!
Sep 24 2020
prev sibling next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Thursday, 24 September 2020 at 15:59:51 UTC, Stefan Koch wrote:
 Oh and typeinfo has to be generated eagerly!
 Whereas the current way it works with ctfe is lazy.
Eh, if it is unreferenced except at CTFE the linker can strip it from the binary (the ctfe functions of course would have to be marked as discardable/ctfe-only).

I'll try to make a test in library later when I'm free. I'm barely able to type these emails and chat messages while holding the baby, lol. Two-handed typing is more important to programming than I like to admit.
Sep 24 2020
parent Andre Pany <andre s-e-a-p.de> writes:
On Thursday, 24 September 2020 at 16:19:34 UTC, Adam D. Ruppe 
wrote:
 On Thursday, 24 September 2020 at 15:59:51 UTC, Stefan Koch 
 wrote:
 Oh and typeinfo has to be generated eagerly!
 Whereas the current way it works with ctfe is lazy.
Eh, if it is unreferenced except at CTFE the linker can strip it from the binary (the ctfe functions of course would have to be marked as discardable/ctfe-only) I'll try to make a test in library later when I'm free. I'm barely able to type these emails and chat messages while holding the baby, lol two-handed typing is more important to programming than I like to admit.
Congratulations on your new family member, Adam. Also here holding a little child while writing :)

Kind regards
Andre
Sep 24 2020
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.com> writes:
On 9/24/20 11:59 AM, Stefan Koch wrote:
 On Thursday, 24 September 2020 at 15:57:03 UTC, Stefan Koch wrote:
 On Thursday, 24 September 2020 at 15:54:55 UTC, Adam D. Ruppe wrote:
 On Thursday, 24 September 2020 at 14:36:45 UTC, Stefan Koch wrote:
 Let me show you the code that does only typeid.name
What if TypeInfo was templated instead of magically generated? So the compiler doesn't need to do all this stuff.
You still have to instantiate all those templates and do it correctly. Which means the compiler needs some way of exposing all the information via the templates system. Which essentially requires the same code.
Oh and typeinfo has to be generated eagerly! Whereas the current way it works with ctfe is lazy. Which means as long as nothing else requires typeinfo, using it at CTFE does not require you to generate it. So code that only uses type info at ctfe does compile with -betterC.
Actually it works to generate typeinfo objects lazily, on demand. I was quite surprised about how nicely that works as I was working on https://github.com/dlang/druntime/pull/3174/files.

I sketched something here: https://run.dlang.io/is/PjRo9w

The code compiles and evaluates correctly during compilation. The URL shortener seems to have problems so I'll also paste the code below:

// Base class for CT and RT type information.
immutable abstract class NewTypeInfo {
    size_t tsize();
}

// Implementation for given type T.
immutable class NewTypeInfoImpl(T) : NewTypeInfo {
    override size_t tsize() {
        return T.sizeof;
    }
}

// Use __typeid!T to get a singleton object associated with type T.
@property immutable(NewTypeInfo) __typeid(T)() {
    static immutable singleton = new NewTypeInfoImpl!T;
    return singleton;
}

auto static_map_tf(alias F)(immutable NewTypeInfo[] types...) {
    typeof(F(types[0]))[] result;
    result.length = types.length;
    foreach(i, t; types) {
        result[i] = F(t);
    }
    return result;
}

static assert(static_map_tf!(t => t.tsize)(__typeid!int, __typeid!ushort) == [4, 2]);

void main() {}
Sep 24 2020
next sibling parent reply Stefan Koch <uplink.coder googlemail.com> writes:
On Thursday, 24 September 2020 at 16:33:37 UTC, Andrei 
Alexandrescu wrote:
 On 9/24/20 11:59 AM, Stefan Koch wrote:
 On Thursday, 24 September 2020 at 15:57:03 UTC, Stefan Koch 
 wrote:
 On Thursday, 24 September 2020 at 15:54:55 UTC, Adam D. Ruppe 
 wrote:
 On Thursday, 24 September 2020 at 14:36:45 UTC, Stefan Koch 
 wrote:
 Let me show you the code that does only typeid.name
What if TypeInfo was templated instead of magically generated? So the compiler doesn't need to do all this stuff.
You still have to instantiate all those templates and do it correctly. Which means the compiler needs some way of exposing all the information via the templates system. Which essentially requires the same code.
Oh and typeinfo has to be generated eagerly! Whereas the current way it works with ctfe is lazy. Which means as long as nothing else requires typeinfo, using it at CTFE does not require you to generate it. So code that only uses type info at ctfe does compile with -betterC.
You still have the problem of spamming the executable with those type-info objects. And you have the problem of having to eagerly generate the entire object, rather than just having the partial information appear that is actually required for the evaluation.
Sep 24 2020
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
Andrei beat me to the test implementation; it is good that it kinda works.

On Thursday, 24 September 2020 at 16:40:21 UTC, Stefan Koch wrote:
 You still have the problem of spamming the executable with 
 those type-info objects.
The linker can strip that.
 And you have the problem of having to eagerly generate the 
 entire object.
Yeah, this is a potential major limitation: you can only use the data available there. But you could define your own classes that provide new data (can we have CTFE-only classes as well as functions?!) as well as limit data you don't need. Would be a new templated class instance per type, which MIGHT run away, but not too bad.

But like you said in the other message, the other issue is getting the original type T back out of the polymorphic typeinfo interface... if you need it. But maybe you could still have one template returning that and just using typeinfo for its internal intermediate calculations.

(TypeInfo -> T can be done for a limited set by linear search. You can't pass a class instance as a template argument (alas... that restriction could *perhaps* be lifted in general tho), but one of the virtual methods could return a numeric id or string that is an index into a limited list of options.)

```
class MyTypeInfo {
    immutable this(string name) { this.name = name; }
    string name;
}

template mytypeid(T) {
    static immutable MyTypeInfo store = new immutable MyTypeInfo(T.stringof);
    immutable(MyTypeInfo) mytypeid() { return store; }
}

template Tof(string name) {
    alias Tof = mixin(name);
}

immutable(MyTypeInfo) getTheString(immutable MyTypeInfo a, immutable MyTypeInfo b) {
    if (a is mytypeid!string)
        return a;
    return b;
}

void main() {
    Tof!(mytypeid!(int).name) i;
    static assert(is(typeof(i) == int));
    i = 4;

    Tof!(getTheString(mytypeid!int, mytypeid!string).name) s;
    static assert(is(typeof(s) == string));
}
```

Obviously the stringof and mixin stuff does NOT work in general, as I talk about every chance I get (DON'T USE STRINGOF!!!!!), but the general approach here is plausible... at least given a limited set of options going into it.
Sep 24 2020
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/24/20 3:00 PM, Adam D. Ruppe wrote:
 But maybe you could still have one template returning that and just 
 using typeinfo for its internal intermediate calculations.
Yah, looking at the ilk of std.meta and std.traits it seems that the more complicated items in there would be done like this:

1. Convert the type(s) into TypeInfo(s)
2. Use everyday algorithms (including std) to carry computation
3. If needed, convert the result(s) back into TypeInfo(s)

BTW, this process is akin to reification: https://en.wikipedia.org/wiki/Reification_(computer_science).

For TypeInfo objects, by far the most frequent operations will be to copy them around and to compare them for equality. Then there are a few more things necessary:

* add a qualifier to a type
* strip a qualifier from a type
* strip all qualifiers from a type
* are two types equal?
* is type1 implicitly convertible to type2?
* if function, give me the return type and the parameter types

So we'd need to add these primitives to TypeInfo. They all must be computable during compilation:

class TypeInfo {
    ... existing stuff ...
    TypeInfo addQualifier(string qual);
    TypeInfo stripQualifier(string qualifier);
    TypeInfo stripAllQualifiers();
    bool implicitlyConvertsTo(Typeinfo another);
    TypeInfo functionReturn();        // null if not a function
    Typeinfo[] functionParameters();  // null if not a function
}

This and a solution to https://issues.dlang.org/show_bug.cgi?id=9945 would impart tremendous power.
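For illustration, a self-contained sketch of how step 2 could look once types are reified into objects, with `CtTypeInfo` standing in for the hypothetical primitives above (the names are assumptions; nothing like this exists in druntime today):

```
import std.algorithm : map;
import std.array : array;

// Stand-in for the hypothetical compile-time TypeInfo primitives.
interface CtTypeInfo
{
    CtTypeInfo stripAllQualifiers();
    CtTypeInfo functionReturn();        // null if not a function
    CtTypeInfo[] functionParameters();  // null if not a function
}

// Collect the unqualified parameter types of a function type, letting an
// ordinary std algorithm carry the computation.
CtTypeInfo[] unqualifiedParams(CtTypeInfo fn)
{
    auto params = fn.functionParameters;
    if (params is null)
        return null;                    // not a function type
    return params.map!(p => p.stripAllQualifiers).array;
}
```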
Sep 25 2020
parent reply Stefan Koch <uplink.coder gmail.com> writes:
On Friday, 25 September 2020 at 13:33:29 UTC, Andrei Alexandrescu 
wrote:

 class TypeInfo {
     ... existing stuff ...
     TypeInfo addQualifier(string qual);
     TypeInfo stripQualifier(string qualifier);
     TypeInfo stripAllQualifiers();
     bool implicitlyConvertsTo(Typeinfo another);
     TypeInfo functionReturn(); // null if not a function
     Typeinfo[] functionParameters(); // null if not a function
 }
Having a string for the qualifier isn't great. I'd much prefer an enum, since the qualifier is a tightly bounded set of possible values. Similarly, I'd like to avoid polymorphic behavior as much as possible. (It might still have to be polymorphic to work at all, but polymorphism should be, as you would say, 'curtailed'.)
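A hedged sketch of what the enum-based signatures could look like (the names here are made up for illustration; nothing like this exists in druntime):

```
// Qualifiers as a closed set rather than strings.
enum TypeQualifier { const_, immutable_, shared_, inout_ }

interface TypeInfoSketch
{
    TypeInfoSketch addQualifier(TypeQualifier q);
    TypeInfoSketch stripQualifier(TypeQualifier q);
    TypeInfoSketch stripAllQualifiers();
    bool implicitlyConvertsTo(TypeInfoSketch another);
}
```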
 This and a solution to 
 https://issues.dlang.org/show_bug.cgi?id=9945 would impart 
 tremendous power.
The solution in there is polymorphic. It comes with all the problems of understandability and unpredictability that polymorphic "solutions" come with.

One of the reasons why type functions can be fast and easily reasoned about is their non-concrete, monomorphic interface. By removing the notion of a concrete interface, the interaction with the compiler can be more efficient and direct. And it also allows us to reuse existing language concepts such as ".sizeof", ".tupleof", ".stringof" on the surface. By adding the notion of monomorphism (alias can't convert to any other type) we achieve type safety and static checking.

Type functions (a misnomer really, because they can work with more than just types) are designed to occupy a functional and syntactic middle ground between templates and functions. Perhaps I should rename them to "compile-time entity introspection functions", CTEIF, but that acronym is unpronounceable. Or I could call them "static template emulating function and namespace", STEFAN? :D
Sep 26 2020
parent Dominikus Dittes Scherkl <dominikus scherkl.de> writes:
On Saturday, 26 September 2020 at 09:56:15 UTC, Stefan Koch wrote:
 or I could call them "static template emulating function and 
 namespace" STEFAN? :D
Yay! That's so cute...
Sep 28 2020
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.com> writes:
Actually I just realized there's no need to rewrite the map algorithm. Given 
that we're dealing with bona fide objects, all algorithms in std work 
out of the box. This compiles and runs:

https://run.dlang.io/is/TprA9u

==========================

import std;

// Base class for CT and RT type information.
immutable abstract class NewTypeInfo {
     size_t tsize();
}

// Implementation for given type T.
immutable class NewTypeInfoImpl(T) : NewTypeInfo {
     override size_t tsize() {
         return T.sizeof;
     }
}

// Use __typeid!T to get a singleton object associated with type T.
@property immutable(NewTypeInfo) __typeid(T)() {
     static immutable singleton = new NewTypeInfoImpl!T;
     return singleton;
}

static assert([__typeid!int, __typeid!ushort].map!(t => 
t.tsize).equal([4, 2]));

void main() {}
Sep 24 2020
parent Stefan Koch <uplink.coder googlemail.com> writes:
On Thursday, 24 September 2020 at 16:41:26 UTC, Andrei 
Alexandrescu wrote:
 Actually I just realized no need to rewrite the map algorithm. 
 Given that we're dealing with bona fide objects, all algorithms 
 in std work out of the box. This compiles and runs:

 https://run.dlang.io/is/TprA9u

 ==========================

 import std;

 // Base class for CT and RT type information.
 immutable abstract class NewTypeInfo {
     size_t tsize();
 }

 // Implementation for given type T.
 immutable class NewTypeInfoImpl(T) : NewTypeInfo {
     override size_t tsize() {
         return T.sizeof;
     }
 }

 // Use __typeid!T to get a singleton object associated with 
 type T.
 @property immutable(NewTypeInfo) __typeid(T)() {
     static immutable singleton = new NewTypeInfoImpl!T;
     return singleton;
 }

 static assert([__typeid!int, __typeid!ushort].map!(t => 
 t.tsize).equal([4, 2]));

 void main() {}
Now all you have to do is to be able to use the type member of the typeid template upon returning from the function. And you're at feature parity with type functions. That however requires a TOP-type container, which is alias[] :)
Sep 24 2020
prev sibling parent Adam D. Ruppe <destructionator gmail.com> writes:
On Thursday, 24 September 2020 at 15:57:03 UTC, Stefan Koch wrote:
 Which essentially requires the same code.
But only once (as trait) instead of three times (trait, RTTI, CTTI).
Sep 24 2020
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 24.09.20 07:13, Andrei Alexandrescu wrote:
 
 A few comments:
 
 * Currently mixin closes the circle by taking back the typeid to the 
 type it started from. It would be nice to have something better, e.g. 
 t.Type would just be the type.
https://issues.dlang.org/show_bug.cgi?id=9945
Sep 24 2020
next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 9/24/20 6:12 AM, Timon Gehr wrote:
 On 24.09.20 07:13, Andrei Alexandrescu wrote:
 A few comments:

 * Currently mixin closes the circle by taking back the typeid to the 
 type it started from. It would be nice to have something better, e.g. 
 t.Type would just be the type.
https://issues.dlang.org/show_bug.cgi?id=9945
Hm... so I'm guessing TypeInfo.Type or __traits(typeFromId) would only work for CTFE? That sounds interesting. Could be a different way to implement type functions, without requiring a new mechanism of passing type data. And there is the potential (I think) of making functions that are only type functions for CTFE, and can do something different if used at runtime (via __ctfe).
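For what it's worth, the __ctfe branching itself is plain D that works today; a minimal sketch of just that mechanism (the compile-time-only behavior a type function would add is left out):

```
// Minimal sketch of __ctfe-based branching: the same function can take a
// compile-time path and a different runtime path.
string evaluationContext()
{
    return __ctfe ? "compile time" : "run time";
}

static assert(evaluationContext() == "compile time"); // CTFE takes the first branch

void main()
{
    assert(evaluationContext() == "run time"); // runtime takes the second
}
```

-Steve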
Sep 24 2020
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.com> writes:
On 9/24/20 8:58 AM, Steven Schveighoffer wrote:
 On 9/24/20 6:12 AM, Timon Gehr wrote:
 On 24.09.20 07:13, Andrei Alexandrescu wrote:
 A few comments:

 * Currently mixin closes the circle by taking back the typeid to the 
 type it started from. It would be nice to have something better, e.g. 
 t.Type would just be the type.
https://issues.dlang.org/show_bug.cgi?id=9945
Hm... so I'm guessing TypeInfo.Type or __traits(typeFromId) would only work for CTFE?
Affirmative - wouldn't make sense otherwise as you could create (or receive) such an object dynamically.
 That sounds interesting. Could be a different way to implement type 
 functions, without requiring a new mechanism of passing type data. And 
 there is the potential (I think) of making functions that are only type 
 functions for CTFE, and can do something different if used at runtime 
 (via __ctfe).
<nod> One nice thing it would do is unify CTTI with RTTI. There's a lot of unused potential in the run-time use of TypeInfo that I'll discuss at a later time.
Sep 24 2020
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.com> writes:
On 9/24/20 6:12 AM, Timon Gehr wrote:
 On 24.09.20 07:13, Andrei Alexandrescu wrote:
 A few comments:

 * Currently mixin closes the circle by taking back the typeid to the 
 type it started from. It would be nice to have something better, e.g. 
 t.Type would just be the type.
https://issues.dlang.org/show_bug.cgi?id=9945
Great, thanks!
Sep 24 2020