
digitalmars.D - type variables

reply Bruce Carneal <bcarneal gmail.com> writes:
Recent discussions in other threads, and at beerconf, have 
prompted thoughts about type variables.  Below is a snapshot of 
my thinking on the topic. Corrections/simplifications/additions 
are solicited, particularly from those who know more about this 
than I do.  (if you're in doubt, answer in the affirmative)


Type variables carry with them quite a few things.  Most 
understandably they carry structural information, how the builtin 
types and compile-time constants were combined to form this type.

They can also carry naming information: phobos Typedef, struct 
field names, method names, aliasings, scope information as to 
where/how the type was formed, ...

Current dlang type 'variables' are created in a strictly 
functional style.  IOW, they come from declarative syntax, CTFE 
constants, and possibly recursive application of templates.

Pure functional programming is great wrt correctness, it's 
working today, but it's not-so-great when it comes to 
readability/maintainability.  For starters, composition is a 
challenge.  Unwinding recursions in your head is another 
challenge.  Debugging is another challenge.

Additionally, any template recursions extend the type names in a 
very ugly way.  Yes, the extension will give you a unique 
(unintelligible ginormous) name but that's about it.  Seems that 
we should be able to get a unique mangle without the garbage, 
something that a human could read and have a prayer of 
understanding while the universe is still warm.

So, what if we had mutable type variables that were 
canonicalized/frozen/vetted by the compiler?

Mutations might initially be restricted to those that could be 
independently checked for correctness.  This could get us off the 
ground wrt composition and avoid a full (re)check when a 
correct/concrete type is needed.

Structural equivalence could be factored out from name 
equivalence, both for speed and functionality.

Type functions should become both more powerful and more readable.

I believe as Andrei apparently does, that we're still in the 
early phases of meta programming, that we need more not less meta 
capability, and that rebasing the compiler to, itself, be both 
simpler and more capable in the meta realm will pay big ongoing 
dividends.

As noted above, destruction requested.
Aug 01 2020
parent reply Paul Backus <snarwin gmail.com> writes:
On Sunday, 2 August 2020 at 00:19:46 UTC, Bruce Carneal wrote:
 Current dlang type 'variables' are created in a strictly 
 functional style.  IOW, they come from declarative syntax, CTFE 
 constants, and possibly recursive application of templates.
In D, currently, there are a few different ways you can refer to a type.

1) By its name.

    int x; // refers to `int` by name

2) By an alias.

    alias T = int;
    T x; // refers to `int` by the alias `T`

3) By a template parameter.

    template foo(T)
    {
        T x; // refers to any type by the parameter `T`
    }

4) By a typeof expression.

    typeof(42) x; // refers to `int` by the expression `typeof(42)`

5) By a __traits expression.

    struct Node
    {
        int data;
        // refers to `Node` by the expression `__traits(parent, data)`
        __traits(parent, data)* next;
    }

6) By a string mixin.

    mixin("int") x; // refers to `int` by the string mixin `mixin("int")`

Of these, the closest things to "type variables" are aliases and template parameters, since they both involve giving an existing type a new name.

Currently, these "type variables" are immutable, in the sense that once a name is given to a type, the same name cannot later be given to a new type. For example, you are not allowed to write code like this:

    alias T = int;
    T x = 42;
    T = string; // error: can't assign to an alias
    T s = "hello";

In general, this is a good thing--having `T` refer to two different types at two different points in the program makes the code harder to read (because you have to keep track of which type it refers to) and harder to modify (because you have to make sure the changes to `T` and its uses remain in the correct order relative to one another).

However, in certain specific contexts, the ability to modify an alias or a template parameter can be useful. This is where proposals like type functions come in: they provide a context in which aliases like `T` can be mutated, while still leaving them immutable in "normal" D code.
 Pure functional programming is great wrt correctness, it's 
 working today, but it's not-so-great when it comes to 
 readability/maintainability.  For starters, composition is a 
 challenge.  Unwinding recursions in your head is another 
 challenge.  Debugging is another challenge.
The main issue with recursion is not that it is difficult to understand or maintain, but that it has poor performance. In order to process an argument list of length N using template recursion, N separate template instantiations are required. Using mutation instead of recursion would reduce the memory required by such templates from O(N) to O(1). This is the primary motivation behind proposals like Stefan Koch's type functions and Manu's `...` operator.
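For concreteness, here is a classic recursive pattern of this kind (a minimal sketch; `SizeSum` is an illustrative name, not an existing library template):

```d
// Recursive template: summing the sizes of a type list.
// Instantiating SizeSum!(T1, ..., TN) forces roughly N separate
// template instantiations, one per tail of the list, each of which
// the compiler must create, analyze, and cache.
template SizeSum(Ts...)
{
    static if (Ts.length == 0)
        enum SizeSum = 0;
    else
        enum SizeSum = Ts[0].sizeof + SizeSum!(Ts[1 .. $]);
}

static assert(SizeSum!(int, long, short) == 4 + 8 + 2);
```

With mutation available, the same computation would be a single loop over `Ts` inside one instantiation.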
 Additionally, any template recursions extend the type names in 
 a very ugly way.  Yes, the extension will give you a unique 
 (unintelligible ginormous) name but that's about it.  Seems 
 that we should be able to get a unique mangle without the 
 garbage, something that a human could read and have a prayer of 
 understanding while the universe is still warm.
As far as I'm aware, the problem with ginormous names had nothing to do with template *recursion*. Rather, it resulted from the fact that code making heavy use of templates (for example, UFCS chains of std.algorithm and std.range functions) generated names in which the *same exact* type names were *repeated* many, many times. If you know of an example of excessively large symbol names resulting *specifically* from template recursion, I would be interested to hear about it.
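To illustrate what I mean, even a short UFCS chain produces a deeply nested type name (a minimal sketch; the exact printed name varies by compiler version):

```d
void main()
{
    import std.algorithm : filter, map;
    import std.range : iota;

    auto r = iota(0, 100).map!(x => x * 2).filter!(x => x % 3 == 0);

    // The type of `r` is a FilterResult wrapping a MapResult wrapping
    // iota's result type; each wrapper's name textually contains the
    // full names of everything beneath it, so inner names repeat at
    // every level of the chain.
    pragma(msg, typeof(r).stringof);
}
```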
Aug 01 2020
next sibling parent reply Bruce Carneal <bcarneal gmail.com> writes:
On Sunday, 2 August 2020 at 02:21:30 UTC, Paul Backus wrote:
 On Sunday, 2 August 2020 at 00:19:46 UTC, Bruce Carneal wrote:
 Current dlang type 'variables' are created in a strictly 
 functional style.  IOW, they come from declarative syntax, 
 CTFE constants, and possibly recursive application of 
 templates.
 In D, currently, there are a few different ways you can refer to 
 a type.

 1) By its name.

     int x; // refers to `int` by name

 2) By an alias.
..
 3) By a template parameter.
..
 4) By a typeof expression.
..
 5) By a __traits expression.
..
 6) By a string mixin.
..

 aliases and template parameters, since they both involve giving 
 an existing type a new name.
I was thinking more about gaining access to a mutable form which could be converted to/from the concrete types represented by the compiler under the hood, rather than the current methods of creating types in the source. Note that "mixin" reduces to the others above, and the others, of course, reduce to the compiler's internal form.
 Currently, these "type variables" are immutable, in the sense 
 that once a name is given to a type, the same name cannot later 
 be given to a new type. For example, you are not allowed to
...
 In general, this is a good thing--having `T` refer to two 
 different types ...
Yes, unrestricted type mutation capability is a non-goal.

So today we have a large and growing zoo of forms that we can utilize to work with types. A mutable class convertible to/from whatever the compiler is using "under the hood" might be a better way. It might be used to implement the zoo while limiting further special casing. Switching metaphors: we'd have less unsprung weight going forward.
 However, in certain specific contexts, the ability to modify an 
 alias or a template parameter can be useful. This is where 
 proposals like type functions come in: they provide a context 
 in which aliases like `T` can be mutated, while still leaving 
 them immutable in "normal" D code.
Yes, there are other ways to achieve better readability without exposing types as completely as I've sketched.
 Pure functional programming is great wrt correctness, it's 
 working today, but it's not-so-great when it comes to 
 readability/maintainability.  For starters, composition is a 
 challenge.  Unwinding recursions in your head is another 
 challenge.  Debugging is another challenge.
The main issue with recursion is not that it is difficult to understand or maintain, but that it has poor performance. In order to process an argument list of length N using template recursion, N separate template instantiations are required. Using mutation instead of recursion would reduce the memory required by such templates from O(N) to O(1). This is the primary motivation behind proposals like Stefan Koch's type functions and Manu's `...` operator.
Yes, I misspoke. It's the potential expanse of the template invocations that's the issue. Recursion tends to improve readability.
 Additionally, any template recursions extend the type names in 
 a very ugly way.  Yes, the extension will give you a unique 
 (unintelligible ginormous) name but that's about it.  Seems 
 that we should be able to get a unique mangle without the 
 garbage, something that a human could read and have a prayer 
 of understanding while the universe is still warm.
As far as I'm aware, the problem with ginormous names had nothing to do with template *recursion*. Rather, it resulted from the fact that code making heavy use of templates (for example, UFCS chains of std.algorithm and std.range functions) generated names in which the *same exact* type names were *repeated* many, many times. If you know of an example of excessively large symbol names resulting *specifically* from template recursion, I would be interested to hear about it.
Yes, as above, I misspoke here: it's not recursion that's the issue, but rather the naming mechanism, which can show up when recurring (though mostly elsewhere).

Factoring out naming from the "anonymous" structural aspects of types seems like a good way to go. If people want to match on type structure for some reason, great. If they want to create ginormous names, well, OK.

Thanks for the response.
Aug 01 2020
next sibling parent reply Bruce Carneal <bcarneal gmail.com> writes:
On Sunday, 2 August 2020 at 04:08:00 UTC, Bruce Carneal wrote:
[snip]
 Factoring out naming from the "anonymous" structural aspects of 
 types seems like a good way to go.  If people want to match on 
 type structure for some reason, great.  If they want to create 
 ginormous names, well, OK.
Did some background reading on LLVM.

The LLVM type system provides both named and unnamed literal/structural forms. Type equality checks within LLVM, both named and otherwise, appear to be non-trivial in general (saw a 2020 paper highlighting the screw-ups and workarounds). Earlier LLVM work aimed to speed things up there by reducing the cost of "uniqueing" mutable type representations (turning deep/expensive equality checks into pointer comparisons, IIUC). As a guess, DMD already does this type of thing, separating deeper commonalities from shallower differences (names, attributes, and the like).

Looks like many of LLVM's more recent problems stem from C/C++ related issues. Again, not a problem for DMD. Still, even concrete/lowered type representation is much less a "solved" problem than I imagined, if LLVM is anything to go by.

The improvements suggested by Manu and Stefan are looking pretty good.
Aug 01 2020
parent reply Stefan Koch <uplink.coder googlemail.com> writes:
On Sunday, 2 August 2020 at 06:36:44 UTC, Bruce Carneal wrote:

 Earlier LLVM work aimed to speed things up there by reducing 
 the cost of "uniqueing" mutable type representations (turning 
 deep/expensive equality checks in to pointer comparisons IIUC).
  As a guess, DMD already does this type of thing, separating 
 deeper commonalities from shallower differences (names, 
 attributes, and the like).
No, dmd uses the mangle to compare types, as the mangle has to be unique and identical for identical types.
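You can observe this from user code via `.mangleof` (a small illustration):

```d
// The mangle is the canonical identity of a type: two names for the
// same type share one mangle, and an alias adds nothing to it.
alias MyInt = int;

static assert(int.mangleof == "i"); // D's ABI mangle for int
static assert(MyInt.mangleof == int.mangleof);

// Distinct nominal types get distinct mangles, even with identical
// layout, because the struct's qualified name is part of the mangle.
struct S { int x; }
struct T { int x; }
static assert(S.mangleof != T.mangleof);
```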
 Looks like many of LLVM's more recent problems stem from C/C++ 
 related issues.  Again not a problem for DMD.

 Still, even concrete/lowered type representation is much less a 
 "solved" problem than I imagined if LLVM is anything to go by.
Hmm yes, everyone has their own type representation, as much as I like bashing LLVM for this they can't be faulted as they try to be a general framework.
 The improvements suggested by Manu and Stefan are looking 
 pretty good.
Thanks!
Aug 02 2020
parent reply Bruce Carneal <bcarneal gmail.com> writes:
On Sunday, 2 August 2020 at 12:16:32 UTC, Stefan Koch wrote:
 On Sunday, 2 August 2020 at 06:36:44 UTC, Bruce Carneal wrote:

 Earlier LLVM work aimed to speed things up there by reducing 
 the cost of "uniqueing" mutable type representations (turning 
 deep/expensive equality checks in to pointer comparisons IIUC).
  As a guess, DMD already does this type of thing, separating 
 deeper commonalities from shallower differences (names, 
 attributes, and the like).
No, dmd uses the mangle to compare types, as the mangle has to be unique and identical for identical types.
Thanks for this peek under the hood. I've read a little of the sdc code but have not dipped into dmd yet. Should have done so before opining as I did.
 Looks like many of LLVM's more recent problems stem from C/C++ 
 related issues.  Again not a problem for DMD.

 Still, even concrete/lowered type representation is much less 
 a "solved" problem than I imagined if LLVM is anything to go 
 by.
Hmm yes, everyone has their own type representation, as much as I like bashing LLVM for this they can't be faulted as they try to be a general framework.
Yes. Reportedly, LLVM has had problems with a mutable type representation that is "uniqueified" after the fact. IIUC, DMD simplifies this significantly by constraining the evolution of the internal type representation. That constraint appears to be so strong (strongly pure, IIUC) that unique names could be provided far upstream of any deep template invocations.
 The improvements suggested by Manu and Stefan are looking 
 pretty good.
Thanks!
Aug 02 2020
parent Bruce Carneal <bcarneal gmail.com> writes:
On Sunday, 2 August 2020 at 14:42:39 UTC, Bruce Carneal wrote:
 On Sunday, 2 August 2020 at 12:16:32 UTC, Stefan Koch wrote:
 Hmm yes, everyone has their own type representation, as much 
 as I like bashing LLVM
 for this they can't be faulted as they try to be a general 
 framework.
Yes. Reportedly LLVM has had problems with a mutable type representation that is "uniqueified" after the fact. IIUC, DMD simplifies this significantly by constraining the evolution of the internal type representation. That constraint appears to be so strong, strongly pure IIUC, that unique names could be provided far upstream of any deep template invocations.
Separately, isn't structural comparison (comparison without names) important for template bloat reduction? Does DMD use names and attributes to qualify access up front, but reduce to low-level equivalence before handing off to the back end?

Since you have to let the back end do deduplication in the LTO case, my guess is that the front end doesn't waste time on it.
Aug 02 2020
prev sibling parent reply Paul Backus <snarwin gmail.com> writes:
On Sunday, 2 August 2020 at 04:08:00 UTC, Bruce Carneal wrote:
 I was thinking more about gaining access to a mutable form 
 which could be converted to/from concrete types as represented 
 by the compiler under the hood rather than the current methods 
 of creating types in the source.  Note that "mixin" reduces to 
 the others above and the others, of course, reduce to the 
 compiler's internal form.

 ...

 Yes, unrestricted type mutation capability is a non goal. So 
 today we have a large and growing zoo of forms that we can 
 utilize to work with types.  A mutable class convertible 
 to/from whatever the compiler is using "under the hood" might 
 be a better way.  It might be used to implement the zoo while 
 limiting further special casing.  Switching metaphors, we'd 
 have less unsprung weight going forward.
Correct me if I'm wrong, but it sounds to me like what you have in mind is something like this:

    // type function
    alias Tuple(Ts...)
    {
        TypeBuilder b;
        b.kind = Kind.struct;

        foreach (T; Ts)
            b.addMember(T, ""); // anonymous member

        return b.toType;
    }

That is, we have some mutable representation of a type, which we can manipulate via some compiler-defined API, and once we're done, we convert the result to a "real", immutable type, which can be used in other parts of the program.

I can see how this sort of thing might be useful. Indeed, if you generalize this design from just types to *all* kinds of AST nodes (FunctionBuilder, ExpressionBuilder, etc.), what you end up with is essentially a procedural macro system.

The problem is that D already has features for performing these kinds of AST manipulations: static if, static foreach, and mixins. From a language-design perspective, adding new features that duplicate the functionality of existing ones is generally not a good idea.

Rather than attempt to *replace* D's existing metaprogramming features with something entirely new, I think it would be much better to *extend* them with language features that allow us to overcome their current limitations.
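For comparison, a Tuple with anonymous-by-convention members can already be built with those existing features today (a sketch using `static foreach` plus a string mixin; `field0`, `field1`, ... are names I made up for the generated members):

```d
import std.format : format;

// The same aggregate the hypothetical TypeBuilder would produce,
// built with static foreach + string mixin instead.
struct Tuple(Ts...)
{
    static foreach (i, T; Ts)
        mixin(format!"Ts[%d] field%d;"(i, i)); // e.g. "Ts[0] field0;"
}

unittest
{
    Tuple!(int, string) t;
    t.field0 = 42;
    t.field1 = "hello";
    assert(t.field0 == 42 && t.field1 == "hello");
}
```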
 Factoring out naming from the "anonymous" structural aspects of 
 types seems like a good way to go.  If people want to match on 
 type structure for some reason, great.  If they want to create 
 ginormous names, well, OK.
I don't understand what structural vs. nominal typing has to do with the rest of your post.
Aug 02 2020
parent Bruce Carneal <bcarneal gmail.com> writes:
On Sunday, 2 August 2020 at 16:38:30 UTC, Paul Backus wrote:
 On Sunday, 2 August 2020 at 04:08:00 UTC, Bruce Carneal wrote:
 I was thinking more about gaining access to a mutable form 
 which could be converted to/from concrete types as represented 
 by the compiler under the hood rather than the current methods 
 of creating types in the source.  Note that "mixin" reduces to 
 the others above and the others, of course, reduce to the 
 compiler's internal form.

 ...

 Yes, unrestricted type mutation capability is a non goal. So 
 today we have a large and growing zoo of forms that we can 
 utilize to work with types.  A mutable class convertible 
 to/from whatever the compiler is using "under the hood" might 
 be a better way.  It might be used to implement the zoo while 
 limiting further special casing.  Switching metaphors, we'd 
 have less unsprung weight going forward.
Correct me if I'm wrong, but it sounds to me like what you have in mind is something like this:

    // type function
    alias Tuple(Ts...)
    {
        TypeBuilder b;
        b.kind = Kind.struct;

        foreach (T; Ts)
            b.addMember(T, ""); // anonymous member

        return b.toType;
    }

That is, we have some mutable representation of a type, which we can manipulate via some compiler-defined API, and once we're done, we convert the result to a "real", immutable type, which can be used in other parts of the program.
Yes. That was the idea.

As sketched, the "mutable" form would initially only allow locally verifiable mutations; IOW, an attempted mutation would either error out or leave you with a valid type. After thinking about it a little more, and reading about LLVM's woes wrt a more generally mutable type representation, I'd say such mutation constraints would be very useful, if not essential, long term.
 I can see how this sort of thing might be useful. Indeed, if 
 you generalize this design from just types to *all* kinds of 
 AST nodes (FunctionBuilder, ExpressionBuilder, etc.), what you 
 end up with is essentially a procedural macro system.

 The problem is that D already has features for performing these 
 kinds of AST manipulations: static if, static foreach, and 
 mixins. From a language-design perspective, adding new features 
 that duplicate the functionality of existing ones is generally 
 not a good idea.

 Rather that attempt to *replace* D's existing metaprogramming 
 features with something entirely new, I think it would be much 
 better to *extend* them with language features that allow us to 
 overcome their current limitations.
I think that the metric to use here is capability/complexity going forward. Legacy "dead weight" is an issue, but hopefully the new capabilities can help us there, in a big way.

For me, "complexity" equates almost perfectly to readability. If performant code written using a new capability isn't more readable, more directly comprehensible, then it's a no-go. If a new capability doesn't admit a performant, readable implementation, it's also a no-go.
 Factoring out naming from the "anonymous" structural aspects 
 of types seems like a good way to go.  If people want to match 
 on type structure for some reason, great.  If they want to 
 create ginormous names, well, OK.
I don't understand what structural vs. nominal typing has to do with the rest of your post.
It was just in the context of "if we're introducing this new form, what all should it enable?". There are some times when you care which names refer to an underlying "anonymous" type, and other times when working with the factored form is the focus. Attributes and linkage directives might also be factored explicitly, but I think those would better be handled using filter/set operations.

Obviously I'm winging it here, so I appreciate the feedback. Better/cheaper to weed out bad ideas early.
Aug 02 2020
prev sibling parent reply Stefan Koch <uplink.coder googlemail.com> writes:
On Sunday, 2 August 2020 at 02:21:30 UTC, Paul Backus wrote:

 Pure functional programming is great wrt correctness, it's 
 working today, but it's not-so-great when it comes to 
 readability/maintainability.  For starters, composition is a 
 challenge.  Unwinding recursions in your head is another 
 challenge.  Debugging is another challenge.
The main issue with recursion is not that it is difficult to understand or maintain, but that it has poor performance. In order to process an argument list of length N using template recursion, N separate template instantiations are required. Using mutation instead of recursion would reduce the memory required by such templates from O(N) to O(1). This is the primary motivation behind proposals like Stefan Koch's type functions and Manu's `...` operator.
While performance is indeed the original motivation, I have come to appreciate being able to use an imperative style in type operations a lot. It's always good to have the choice.

Take this for example:

    int[] iota(int start, int finish)
    {
        int[] result = [];
        result.length = finish - start;
        foreach (i; start .. finish)
        {
            result[i - start] = i;
        }
        return result;
    }

Whereas the recursive function for this is something I do not even want to put on here, because it's way too complicated for this simple task.
 Additionally, any template recursions extend the type names in 
 a very ugly way.  Yes, the extension will give you a unique 
 (unintelligible ginormous) name but that's about it.  Seems 
 that we should be able to get a unique mangle without the 
 garbage, something that a human could read and have a prayer 
 of understanding while the universe is still warm.
As far as I'm aware, the problem with ginormous names had nothing to do with template *recursion*. Rather, it resulted from the fact that code making heavy use of templates (for example, UFCS chains of std.algorithm and std.range functions) generated names in which the *same exact* type names were *repeated* many, many times. If you know of an example of excessively large symbol names resulting *specifically* from template recursion, I would be interested to hear about it.
There are some examples where, even with mangle compression, the mangles are still excessively long. Large string switches are the easy demonstration here. Those are not due to recursion, but recursion certainly does not help, because the intermediate templates are created explicitly and therefore can't be optimised away in the general(!) case.
Aug 02 2020
parent reply Paul Backus <snarwin gmail.com> writes:
On Sunday, 2 August 2020 at 12:31:00 UTC, Stefan Koch wrote:
 Take this for example.
 int[] iota(int start, int finish)
 {
     int[] result = [];
     result.length = finish - start;
     foreach(i;start .. finish)
     {
         result[i - start] = i;
     }
     return result;
 }

 Whereas the recursive function for this,
 is something I do not even want to put on here.
 Because it's way to complicated for this simple task.
In fact, the naive recursive version of iota is actually even simpler:

    int[] iota(int start, int finish)
    {
        if (start >= finish)
            return [];
        else
            return [start] ~ iota(start + 1, finish);
    }

Again, the problem with this function is not complexity or readability, but performance. It performs many unnecessary memory allocations, and consumes (finish - start) stack frames instead of 1.
Aug 02 2020
next sibling parent Bruce Carneal <bcarneal gmail.com> writes:
On Sunday, 2 August 2020 at 15:04:07 UTC, Paul Backus wrote:
 On Sunday, 2 August 2020 at 12:31:00 UTC, Stefan Koch wrote:
 Take this for example.
 int[] iota(int start, int finish)
 {
     int[] result = [];
     result.length = finish - start;
     foreach(i;start .. finish)
     {
         result[i - start] = i;
     }
     return result;
 }

 Whereas the recursive function for this,
 is something I do not even want to put on here.
 Because it's way to complicated for this simple task.
In fact, the naive recursive version of iota is actually even simpler:

    int[] iota(int start, int finish)
    {
        if (start >= finish)
            return [];
        else
            return [start] ~ iota(start + 1, finish);
    }

Again, the problem with this function is not complexity or readability, but performance. It performs many unnecessary memory allocations, and consumes (finish - start) stack frames instead of 1.
Yes. Sorry to have misspoken wrt recursion early on in the thread. As you note here recursion can be, and often is, easy on the eyes. And, as you also note, that seductive simplicity sometimes masks a not-insignificant price.
Aug 02 2020
prev sibling parent reply Stefan Koch <uplink.coder googlemail.com> writes:
On Sunday, 2 August 2020 at 15:04:07 UTC, Paul Backus wrote:
 On Sunday, 2 August 2020 at 12:31:00 UTC, Stefan Koch wrote:
 Take this for example.
 int[] iota(int start, int finish)
 {
     int[] result = [];
     result.length = finish - start;
     foreach(i;start .. finish)
     {
         result[i - start] = i;
     }
     return result;
 }

 Whereas the recursive function for this,
 is something I do not even want to put on here.
 Because it's way to complicated for this simple task.
In fact, the naive recursive version of iota is actually even simpler:

    int[] iota(int start, int finish)
    {
        if (start >= finish)
            return [];
        else
            return [start] ~ iota(start + 1, finish);
    }

Again, the problem with this function is not complexity or readability, but performance. It performs many unnecessary memory allocations, and consumes (finish - start) stack frames instead of 1.
How is that easier?

It requires you to have a stack, which also means that you have to keep track of the stack frames in order to execute this in your head.

This particular function allocates N array literals and does N concatenations, which, when naively implemented, consumes N*N + 2N words of memory.
Aug 02 2020
parent reply Paul Backus <snarwin gmail.com> writes:
On Sunday, 2 August 2020 at 17:27:43 UTC, Stefan Koch wrote:
 How is that easier?

 It requires you to have a stack, which also means that you have 
 to keep track of the stack frames in order to execute this in 
 your head.
To some extent this is a matter of experience and personal taste. If you are comfortable reading recursive code, it is quite simple to understand a function with a single base case and a single recursive case. There is no need to keep track of individual stack frames in your head. Perhaps my experience in this regard is different from the rest of the D community? I learned about recursion in my first year of undergraduate education, so I assumed this sort of thing would be considered common knowledge among experienced programmers.
Aug 02 2020
parent reply Bruce Carneal <bcarneal gmail.com> writes:
On Sunday, 2 August 2020 at 18:21:17 UTC, Paul Backus wrote:
 On Sunday, 2 August 2020 at 17:27:43 UTC, Stefan Koch wrote:
 How is that easier?

 It requires you to have a stack, which also means that you 
 have to keep track of the stack frames in order to execute 
 this in your head.
To some extent this is a matter of experience and personal taste. If you are comfortable reading recursive code, it is quite simple to understand a function with a single base case and a single recursive case. There is no need to keep track of individual stack frames in your head. Perhaps my experience in this regard is different from the rest of the D community? I learned about recursion in my first year of undergraduate education, so I assumed this sort of thing would be considered common knowledge among experienced programmers.
I think it's all about readability given the performance constraints. Your type function example earlier in this thread employs iteration and is dead simple. Not much to improve there.
Aug 02 2020
parent reply Paul Backus <snarwin gmail.com> writes:
On Sunday, 2 August 2020 at 20:27:53 UTC, Bruce Carneal wrote:
 I think it's all about readability given the performance 
 constraints.

 Your type function example earlier in this thread employs 
 iteration and is dead simple.  Not much to improve there.
You're conflating two separate issues. The existing alternative to using a TypeBuilder + iteration is `static foreach`--which is also iterative. The question you should answer, if you want to convince people that TypeBuilder (or something like it) is worth adding, is "how is this better than `static foreach` and `static if`?"
Aug 02 2020
next sibling parent Bruce Carneal <bcarneal gmail.com> writes:
On Sunday, 2 August 2020 at 20:42:35 UTC, Paul Backus wrote:
 On Sunday, 2 August 2020 at 20:27:53 UTC, Bruce Carneal wrote:
 I think it's all about readability given the performance 
 constraints.

 Your type function example earlier in this thread employs 
 iteration and is dead simple.  Not much to improve there.
You're conflating two separate issues. The existing alternative to using a TypeBuilder + iteration is `static foreach`--which is also iterative. The question you should answer, if you want to convince people that TypeBuilder (or something like it) is worth adding, is "how is this better than `static foreach` and `static if`?"
The "conflation" was my attempt to fold in the OT recursion sub-thread that I, unfortunately, engendered early on when I misspoke.

How is this better than static foreach and friends? WRT types it should be more efficient, more readable, and more general. The readability and generality claims can be examined via speculative coding: additional examples to go with your inaugural type function. The performance claim verification would have to wait for a prototype, but in the mean time we have informed opinion from the front-end experts.

My main objective here is to raise the possibility of a broadly applicable meta programming advance that could roll up a bunch of special cases now and expand the reach of "mere mortal" metaprogrammers in the future. I'm exploring here, not crusading. If there is interest now, great. If not, well, that's information too.

Something like what has been sketched in this thread may be a bridge too far. It may be much better suited to an sdc revival or some other front-end development effort. It may need to go into the D3 basket. It may follow many other ideas into the dust bin. We'll see.
Aug 02 2020
prev sibling parent reply Stefan Koch <uplink.coder googlemail.com> writes:
On Sunday, 2 August 2020 at 20:42:35 UTC, Paul Backus wrote:
 On Sunday, 2 August 2020 at 20:27:53 UTC, Bruce Carneal wrote:
 I think it's all about readability given the performance 
 constraints.

 Your type function example earlier in this thread employs 
 iteration and is dead simple.  Not much to improve there.
You're conflating two separate issues. The existing alternative to using a TypeBuilder + iteration is `static foreach`--which is also iterative. The question you should answer, if you want to convince people that TypeBuilder (or something like it) is worth adding, is "how is this better than `static foreach` and `static if`?"
static foreach and static if come with compile-time performance prices to pay. It comes down to having to do semantic processing in a piece-wise fashion, multiple times.

The type function approach I am working on does semantic processing of invariant parts only once, whereas the static foreach body cannot know about invariant regions. (That's a general statement; in special cases it might be possible to implement such awareness, but I would not advise it.)
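As a concrete illustration of the piece-wise processing (a minimal sketch; `classify` and the keyword list are made up for the example), each static foreach iteration below generates a distinct `case` that the compiler must analyze separately, even though the bodies are textually identical:

```d
// static foreach generating the cases of a string switch; every
// iteration produces a separate case statement, semantically
// processed on its own.
string classify(string s)
{
    switch (s)
    {
        static foreach (kw; ["if", "else", "while", "for", "foreach"])
        {
            case kw:
                return "keyword";
        }
        default:
            return "identifier";
    }
}
```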
Aug 02 2020
parent Bruce Carneal <bcarneal gmail.com> writes:
On Sunday, 2 August 2020 at 23:04:45 UTC, Stefan Koch wrote:
 On Sunday, 2 August 2020 at 20:42:35 UTC, Paul Backus wrote:
 On Sunday, 2 August 2020 at 20:27:53 UTC, Bruce Carneal wrote:
 I think it's all about readability given the performance 
 constraints.

 Your type function example earlier in this thread employs 
 iteration and is dead simple.  Not much to improve there.
You're conflating two separate issues. The existing alternative to using a TypeBuilder + iteration is `static foreach`--which is also iterative. The question you should answer, if you want to convince people that TypeBuilder (or something like it) is worth adding, is "how is this better than `static foreach` and `static if`?"
static foreach and static if come with compile time performance prices to pay. it comes down to having to do semantic processing in a piece-wise fashion multiple times. The type function approach I am working on does semantic processing of invariant parts only once. Whereas the static foreach body cannot know about invariant regions, (that's a general statement (in special cases it might be possible to implement such awareness (I would not advise it however)))
I'd add that working with sets of types should become very readable and efficient. Doable now, but dead simple when types live as normal objects in the compile-time environment: arrays, slicing, all the good stuff.

I don't see a problem with operating with these objects at runtime either. In-process code gen wouldn't be available, at least not absent something like an LLVM/JIT hookup, but you could play with it all in a more debuggable environment.
Aug 02 2020