## digitalmars.D - I dun a DIP, possibly the best DIP ever

• Manu (24/24) Apr 22 We have a compile time problem, and this is basically the cure.
• rikki cattermole (2/2) Apr 22 Change it to something like .unpackSequence instead of ... and I would
• Steven Schveighoffer (7/9) Apr 22 unpackSequence is a valid identifier. That would be a breaking change.
• rikki cattermole (6/19) Apr 22 Yeah.
• Manu (5/24) Apr 22 I feel it's a perfectly natural expansion of the existing meaning. I'm 3...
• Q. Schroll (28/56) May 08 You can distinguish "Types... params" from "Type[] params..." by
• Steven Schveighoffer (6/14) May 08 Ugh. I doubt it would be a breaking change, as Ts... would suffice (if
• Q. Schroll (5/9) May 08 That's what I said. As I understand the document explaining what
• Manu (8/57) May 08 Does it though?
• Q. Schroll (22/34) May 11 I didn't think it would, but when I tested it just to be sure, it
• WebFreak001 (26/28) Apr 22 I'm for some other thing than ... too, in C++ it's a quite bad
• WebFreak001 (17/45) Apr 22 Because it is basically just staticMap but with syntax from the
• Manu (36/59) Apr 22 Negative, it's the single best thing that happened to C++ in almost 2
• Steven Schveighoffer (35/64) Apr 22 It would expand to:
• Manu (19/47) Apr 22 And of course, tuples in tuples auto-flatten according to current D
• Steven Schveighoffer (13/25) Apr 22 I think this is more complicated than the MemberTup thing. By inference
• Manu (38/62) Apr 22 I only say that because;
• Steven Schveighoffer (32/83) Apr 22 OK, that is what I thought too (that is the most useful), but this needs...
• Manu (22/109) Apr 22 Yes, these are very good points. Thanks for spelling this out clearly.
• Dave Chapman (8/10) Apr 22 Since ... is not alphabetic it is hard to search for. If you do
• Steven Schveighoffer (27/53) Apr 22 Yes please! Where is the reference implementation? I want to try some
• Manu (18/67) Apr 22 The link is in the DIP.
• Steven Schveighoffer (38/80) Apr 22 Ugh, I think I won't be able to test then, most of this stuff works with...
• Manu (47/122) Apr 22 Types work now, it just depends what you do with them.
• Steven Schveighoffer (11/24) Apr 22 Like this works today:
• Steven Schveighoffer (7/14) Apr 22 This is awesome, and I'm not seeing why you would save it for later.
• Manu (13/25) Apr 22 Because it's strictly additive, DIP's with over-reach tend to fail.
• Steven Schveighoffer (11/39) Apr 22 Without the folding, I think this feature leaves a lot on the table. I'd...
• Manu (28/68) Apr 22 I think you'll discover that the map's are the more expensive operations...
• Q. Schroll (19/35) May 08 That's a good addition. Note that C++ requires parentheses around
• Stefan Koch (13/15) Apr 22 As you can see I have been busy with something similar.
• Manu (5/35) Apr 23 Stefan worked out how to make the TemplateInstance case work. The branch...
• Walter Bright (5/12) Apr 23 This can be done with the array syntax proposal using UFCS:
• Guillaume Piolat (6/10) Apr 22 YES PLEASE!
• Sebastiaan Koppe (2/5) Apr 22 Love it. Syntax and all.
• Panke (2/4) Apr 22 Thanks! Great work.
• Arine (4/6) Apr 22 This is going to be the first DIP in a long time I can actually
• WebFreak001 (32/35) Apr 23 looking at it again, I actually quite start to like it because
• Steven Schveighoffer (23/65) Apr 23 If are looking for the second item out of the tuple, your last
• Stefan Koch (6/11) Apr 23 It already is. Tuples without operations are just expanded
• WebFreak001 (12/39) Apr 23 sure it's silly but it might be something else than [2] like
• Simen Kjærås (11/16) Apr 23 This is beautiful and awesome (syntax and all).
• Stefan Koch (4/17) Apr 23 doing the cartesian product of the tuples was in my first
• Steven Schveighoffer (9/26) Apr 23 You need to do part of it with templates I think. But it should be more
• Stefan Koch (7/8) Apr 23 yeah no.
• Stefan Koch (18/28) Apr 23 to make the point clearer:
• Manu (5/15) Apr 23 I don't think that's true according to the current implementation.
• Manu (6/24) Apr 23 Oh no, I'm wrong. I think we have a bug there...
• Stefan Koch (3/5) Apr 23 In the current implementation it's a parser error.
• WebFreak001 (17/24) Apr 23 I would assume that both
• Manu (22/46) Apr 23 Well, they both do nothing.
• Manu (16/30) Apr 23 You can do this by expanding tuples with the appropriate indices:
• Mafi (12/25) Apr 23 I think ...-Expressions should first expand nested
• Stefan Koch (3/16) Apr 23 that won't work.
• pineapple (19/19) Apr 23 I like this idea, but I don't like the syntax.
• Mafi (6/27) Apr 23 Well, then what about:
• Steven Schveighoffer (23/54) Apr 23 In this call, Y... does nothing, it just trivially means Y, so you
• Stefan Koch (5/9) Apr 23 As long as we don't allow tuples of tuples yes.
• Mafi (16/27) Apr 23 This is wrong under the assumption I wrote before: nested
• Walter Bright (3/4) Apr 23 Well done, Manu! This looks like a very good idea. I'm optimistic about ...
• Manu (4/9) Apr 23 🎉🎉🎉
• Paolo Invernizzi (2/6) Apr 24 That made my day a sunny day! :-P
• Walter Bright (59/60) Apr 23 Ok, I've had a chance to think about it. It's a scathingly brilliant ide...
• Manu (39/102) Apr 23 I thought about this, but this reaches much further than a op b .
• Walter Bright (26/54) Apr 23 I expect static foreach can handle that. But we can dig a little deeper....
• WebFreak001 (12/17) Apr 23 this would be a breaking change:
• Manu (17/83) Apr 23 You can imagine that the expressions could be far more elaborate than th...
• Manu (10/23) Apr 23 No, it's not necessary that they are common types. It's actually the
• Walter Bright (4/25) Apr 24 The only way:
• Manu (7/38) Apr 24 It doesn't make much sense to think in terms of primitive types. As I ju...
• Walter Bright (3/7) Apr 24 [ cast(bool)Tup ]
• Manu (5/15) Apr 24 This claim doesn't make sense. A custom type can BinOp any adjacent type...
• Walter Bright (8/28) Apr 24 I can't see the legitimate use case for:
• Steven Schveighoffer (12/14) Apr 24 This is probably good enough, because we can generate arrays at
• Timon Gehr (6/28) May 09 This is not equivalent to the current implementations.
• Steven Schveighoffer (12/42) May 09 I'm not sure I'd call that a "feature" though, or just invalid input:
• Nick Treleaven (24/27) May 10 That might be slower than the existing templates (now in
• Stefan Koch (11/38) May 10 Here is how this would look as a type function.
• Steven Schveighoffer (12/39) May 10 I admit, I didn't look at the implementation, I just assumed it was one
• Simen Kjærås (4/9) Apr 24 Sure they do - just think of expression templates.
• Walter Bright (9/12) Apr 24 We're not adding features to support expression templates with tuples.
• Atila Neves (4/19) Apr 27 ETs are very much alive in C++. Both Eigen and Boost.Spirit are
• jmh530 (5/9) Apr 27 Eigen is also a dependency in a number of other libraries. When
• Atila Neves (5/15) Apr 27 I know. Eigen is literally the reason why I can't convince any
• Walter Bright (6/20) Apr 27 Actually, you can write expression templates in D (I've done it as a dem...
• H. S. Teoh (32/38) Apr 27 Operator overload abuse of the kind Boost Spirit does makes me cringe.
• rikki cattermole (3/3) Apr 27 A while back on IRC we came up with an idea for first class types.
• Stefan Koch (6/10) Apr 27 I was going for a solution which leaves most of the syntax as is.
• Atila Neves (6/21) Apr 28 These limitations are one of the reasons why I'm not sure it's
• Stefan Koch (4/7) Apr 27 One of the good things about D is that Boost is not in the
• Manu (27/30) Apr 24 Your idea all falls over anywhere it encounters variadic args, or potent...
• Piotr Mitana (14/14) Apr 24 My two cents; the idea is nice, however I would consider a bit
• Walter Bright (24/41) Apr 24 Write it as:
• Walter Bright (2/18) Apr 24 Incidentally, belay that. That will currently produce: fun(0, 2, 3);
• Nick Treleaven (10/25) Apr 24 This syntax is an unfortunate inconsistency with your proposal,
• Walter Bright (6/32) Apr 24 Whether it's intuitive or not depends on your point of view. For example...
• Adam D. Ruppe (11/13) Apr 24 It is fairly important for the string interpolation DIP that's
• Manu (17/45) Apr 24 I pointed out earlier, and I'll point out again, that Walter's proposal ...
• Steven Schveighoffer (25/31) Apr 24 The point of this is to reduce the amount of "trivial" templates. Every
• Meta (7/11) Apr 24 "staticIota" *should* be a primitive, and D even supports this
• Walter Bright (20/46) Apr 24 No argument there.
• Steven Schveighoffer (40/83) Apr 24 It's not just the expressive power, but the simple expressiveness of the...
• Stefan Koch (16/27) Apr 24 AliasSeq itself is not the problem at all.
• Walter Bright (7/8) Apr 24 I did a little research:
• Manu (33/38) Apr 24 It is so predictable that you will eventually produce a sentence like th...
• Walter Bright (3/13) Apr 25 Giving this a try:
• user1234 (3/18) Apr 25 Nice to see the idea applied. Can you give numbers, i.e from
• Stefan Koch (2/17) Apr 25 This is supposed to make using staticMap cheap?
• Walter Bright (6/7) Apr 25 No. It's to make AliasSeq cheap, to remove motivation for making a speci...
• Stefan Koch (6/15) Apr 25 As I've said before: AliasSeq is not the slow part.
• Walter Bright (3/4) Apr 25 I agree. But as your benchmark showed, AliasSeq is worthwhile to do this...
• Manu (8/12) Apr 25 It's still un-fun to type AliasSeq!(), and I've never liked how it reads...
• Walter Bright (8/15) Apr 25 It was originally TypeTuple!(), and still is in druntime. (Yes, the PR
• Adam D. Ruppe (16/21) Apr 25 People used to get confused over std.typecons.Tuple and the old
• Timon Gehr (2/11) May 09 Tuple does not indicate at all what it is. Tuples usually don't auto-exp...
• Timon Gehr (14/38) May 09 That's the same thing.
• Manu (8/139) Apr 23 I guess this is the key case you need to solve for:
• Walter Bright (4/5) Apr 24 Please do not re-quote every ancestor of the thread. Just quote enough t...
• Walter Bright (3/13) Apr 24 Fair enough. Though there needs to be a rationale as to why those two pa...
• Walter Bright (13/27) Apr 24 Please keep in mind that the following works today:
• Stefan Koch (5/23) Apr 24 Because of implementation issues in static foreach that'll take
• Manu (9/38) Apr 24 static foreach is not an expression, and it's very hard to involve those
• Walter Bright (4/9) Apr 24 This is why I suggested in the "Challenge" thread that these need to be
• Stefan Koch (9/22) Apr 24 not on 64bit.
• Manu (33/47) Apr 24 That is essentially the whole thing, simmered down to it's essence. Solv...
• Sebastiaan Koppe (5/10) Apr 24 I also think we should make it explicit. Makes it far better.
• Steven Schveighoffer (7/21) Apr 24 I think there's a fundamental piece missing from the requirements, you
• rikki cattermole (2/2) Apr 24 This can resolve my complaint about syntax and I can happily but out of
• Stefan Koch (14/18) Apr 24 I honestly don't care about which syntax we chose, but I care
• Steven Schveighoffer (23/63) Apr 24 Hm... but how do you know where to expand the expression and where to st...
• Piotr Mitana (15/16) Apr 24 My two cents; the idea is nice, however I would consider a bit
• Mafi (36/67) Apr 24 There is more corner cases to consider: AliasSeq-like members and
• Walter Bright (6/7) Apr 24 The examples in the DIP are, frankly, too trivial to make a case for the...
• Steven Schveighoffer (87/96) Apr 25 I have a couple: std.meta.NoDuplicates, and std.meta.Filter
• Steven Schveighoffer (14/23) Apr 25 also note, we can obsolete std.meta.anySatisfy or std.meta.allSatisfy,
• Paul Backus (4/8) Apr 25 Looks like std.algorithm.any and std.algorithm.all can already do
• Steven Schveighoffer (5/11) Apr 26 Nice! The only thing I would say is I don't want to have a giant nest of...
• Stefan Koch (4/19) Apr 26 If you need this implementation to be extended please drop me a
• Adam D. Ruppe (21/21) May 07 I think I just had a use for this.
• Adam D. Ruppe (22/22) May 08 Another potential use for this would be writing type translation
• Manu (5/27) May 08 Yes! I use this pattern _all the time_ in C++, and it's definitely a
• Q. Schroll (34/34) May 08 There are some corner cases I'd like to have an answer to:
• Adam D. Ruppe (6/7) May 08 This is just a struct, the ... shouldn't do anything to it
• Manu (4/11) May 08 ^^^
• Timon Gehr (2/18) May 09 It implements a tuple. It's just not a weird built-in compiler "tuple".
• Q. Schroll (26/33) May 11 It surely is a struct, but it's a struct with an alias-this to a
Manu <turkeyman gmail.com> writes:
We have a compile time problem, and this is basically the cure.
Intuitively, people imagine CTFE is expensive (and it kinda is), but
really, the reason our compile times are bad is template instantiation.

This DIP single-handedly fixes compile-time issues in programs I've written
by reducing template instantiations by near-100%, in particular, the
expensive ones; recursive instantiations, usually implementing some form of
static map.

https://github.com/dlang/DIPs/pull/188

This is an RFC on a draft, but I'd like to submit it with a reference
implementation soon.

Stefan Koch has helped me with a reference implementation, which has so far
gone surprisingly smoothly, and has shown 50x improvement in compile times
in some artificial tests.
I expect much greater improvements in situations where recursive template
expansion reaches a practical threshold due to quadratic resource
consumption used by recursive expansions (junk template instantiations, and
explosive symbol name lengths).
This should also drastically reduce compiler memory consumption in
meta-programming heavy applications.

In addition to that, it's simple, terse, and reduces program logic
indirection via 'utility' template definitions, which I find improves
readability.

We should have done this a long time ago.

- Manu

Apr 22
rikki cattermole <rikki cattermole.co.nz> writes:
Change it to something like .unpackSequence instead of ... and I would
be happy.

Apr 22
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/22/20 8:19 AM, rikki cattermole wrote:
Change it to something like .unpackSequence instead of ... and I would
be happy.

unpackSequence is a valid identifier. That would be a breaking change.
Plus it would be less obvious. If anything it would have to be
unpackSequence!(expr).

Plus there is precedent in C++ for using ...

And it makes intuitive sense -- you define tuples with T...

-Steve

Apr 22
rikki cattermole <rikki cattermole.co.nz> writes:
On 23/04/2020 1:22 AM, Steven Schveighoffer wrote:
unpackSequence is a valid identifier. That would be a breaking change.
Plus it would be less obvious. If anything it would have to be
unpackSequence!(expr).

Plus there is precedent in C++ for using ...

And it makes intuitive sense -- you define tuples with T...

-Steve

Yeah.

But there is also precedent with type properties such as mangleof, instead
of the approach C took with sizeof and friends.

Right now we use it in parameters, adding it to the argument side with a
completely different meaning ugh.

Apr 22
Manu <turkeyman gmail.com> writes:
On Wed, Apr 22, 2020 at 11:35 PM rikki cattermole via Digitalmars-d <
digitalmars-d puremagic.com> wrote:


Yeah.

But there is also precedent with type properties such as mangleof, instead
of the approach C took with sizeof and friends.

Right now we use it in parameters, adding it to the argument side with a
completely different meaning ugh.

I feel it's a perfectly natural expansion of the existing meaning. I'm 300%
happy with the syntax personally, I will have a tantrum if you take it away
from me.

Apr 22
Q. Schroll <qs.il.paperinik gmail.com> writes:
On Wednesday, 22 April 2020 at 13:37:26 UTC, Manu wrote:
On Wed, Apr 22, 2020 at 11:35 PM rikki cattermole via
Digitalmars-d < digitalmars-d puremagic.com> wrote:


Right now we use it in parameters, adding it to the argument
side with a completely different meaning ugh.

You can distinguish "Types... params" from "Type[] params..." by
where the dots are.
In contrast to C and C++, D has a very strong principle that
everything after the parameter (or declared variable) is not part
of the type. So in the latter case, the dots clearly don't belong
to the type.

I feel it's a perfectly natural expansion of the existing
meaning. I'm 300% happy with the syntax personally, I will have
a tantrum if you take it away from me.

The syntax is perfectly fine for me. Don't be distracted by a
single opinion. The ... token currently can only be used in (both
template and function) parameter declarations (and in is()
expressions, where they are kind of the same as template
parameters). You're introducing it in an expression and/or
type-expression context which seems to be fine, but very
technically, it's not. There's one example where it clashes:
Type-safe variable argument functions that omit their last
parameter's name.

One thing I want to mention: Let Ts be the type sequence int,
long, double[]; then

void f(Ts[]...);

becomes ambiguous. It could mean (by the new proposal)

void f(int[] __param_0, long[] __param_1, double[][]
__param_2);

or (by the current language rules)

void f(int __param_0, long __param_1, double[] __param_2...);

I'm pointing it out, because it is theoretically a breaking
change with silent change of behavior. I doubt that any code base
uses that pattern purposefully since it is extremely awkward and
shouldn't make it through any code review.

May 08
Steven Schveighoffer <schveiguy gmail.com> writes:
On 5/8/20 1:13 PM, Q. Schroll wrote:

or (by the current language rules)

void f(int __param_0, long __param_1, double[] __param_2...);

I'm pointing it out, because it is theoretically a breaking change with
silent change of behavior. I doubt that any code base uses that pattern
purposefully since it is extremely awkward and shouldn't make it through
any code review.

Ugh. I doubt it would be a breaking change, as Ts... would suffice (if
you really wanted to "abuse" that feature). As in, it won't break any
existing code, because nobody would do that.

I'd say it's fine for the DIP to take over that syntax.

-Steve

May 08
Q. Schroll <qs.il.paperinik gmail.com> writes:
On Friday, 8 May 2020 at 17:19:18 UTC, Steven Schveighoffer wrote:
I doubt it would be a breaking change, as Ts... would suffice
(if you really wanted to "abuse" that feature). As in, it won't
break any existing code, because nobody would do that.

That's what I said. As I understand the document explaining what
a DIP needs to address, silent change of behavior *must* be addressed.

I'd say it's fine for the DIP to take over that syntax.

Sure.

May 08
Manu <turkeyman gmail.com> writes:
On Sat, May 9, 2020 at 3:15 AM Q. Schroll via Digitalmars-d <
digitalmars-d puremagic.com> wrote:


One thing I want to mention: Let Ts be the type sequence int,
long, double[]; then

void f(Ts[]...);

becomes ambiguous. It could mean (by the new proposal)

Does it though?
I don't think function declarations accept an expression in the argument
list... they are declarations.

Otherwise you'd be able to specify a function like:
void f(10 + 1, Enum.key, nullptr);

I don't think my DIP interacts with function declarations.

May 08
Q. Schroll <qs.il.paperinik gmail.com> writes:
On Saturday, 9 May 2020 at 00:25:35 UTC, Manu wrote:
One thing I want to mention: Let Ts be the type sequence int,
long, double[]; then

void f(Ts[]...);

becomes ambiguous.

Does it though?

I didn't think it would, but when I tested it just to be sure, it
actually compiled to my (and I think everyone else's) great
surprise.

Let Ts be the type sequence int, long, double[]; then the
compiler "rewrites" (for lack of a better word) the declaration

void f(Ts[]...);

to

void f(int __param_0, long __param_1, double[] __param_2...);

The dots on __param_0 and __param_1 don't do anything (as far as
I know), but make __param_2 a variadic argument. (The compiler
copies all the decorations (scope, in, out, ref, and (to my
surprise) the dots) to the type sequence for every type.)

I don't think function declarations accept an expression in the
argument list... they are declarations.

Otherwise you'd be able to specify a function like:
void f(10 + 1, Enum.key, nullptr);

If Ts contains things that don't resolve to types, the
declaration is ill-formed. That's nothing new.

I don't think my DIP interacts with function declarations.

It *theoretically* does. I have given you an example. Have a look
at it yourself https://run.dlang.io/is/PYKyx1
By the way, you can replace Ts[]... by Ts... with no
difference in the type of f.

That case is in the darkest corner of D. Just mention it in the
DIP so that no one can reject it upon potential breakage that was not
considered.

May 11
WebFreak001 <d.forum webfreak.org> writes:
On Wednesday, 22 April 2020 at 12:19:25 UTC, rikki cattermole
wrote:
Change it to something like .unpackSequence instead of ... and
I would be happy.

I'm for some other thing than ... too, in C++ it's a quite bad
syntax and it can become very ugly to maintain with all the
different combinations you could use it with from a compiler
viewpoint. Additionally if you do miss some combinations or allow
more complex operators it will quickly become a mess of "why does
this compile but why doesn't this compile" or "why does this do
something different"

I'm especially not a fan of allowing all of
(Tup*10)...  -->  ( Tup[0]*10, Tup[1]*10, ... , Tup[$-1]*10 )
and
(Tup1*Tup2)...
and
myArr[Tup + 1]... -> myArr[Tup[0] + 1], myArr[Tup[1] + 1], myArr[Tup[2] + 1]

would this be valid:
f!T...(x)
or rather this?
f!T(x)...

what does (cast(Tup)x)... evaluate to?

is f(Tup) and f(Tup...) now doing the same thing?

if
f(Items.x...)
works then what about
f(Items.MemberTup...) ?

Apr 22
WebFreak001 <d.forum webfreak.org> writes:
On Wednesday, 22 April 2020 at 13:45:52 UTC, WebFreak001 wrote:
On Wednesday, 22 April 2020 at 12:19:25 UTC, rikki cattermole wrote:
Change it to something like .unpackSequence instead of ... and
I would be happy.

I'm for some other thing than ... too, in C++ it's a quite bad
syntax and it can become very ugly to maintain with all the
different combinations you could use it with from a compiler
viewpoint. Additionally if you do miss some combinations or allow
more complex operators it will quickly become a mess of "why does
this compile but why doesn't this compile" or "why does this do
something different"

I'm especially not a fan of allowing all of
(Tup*10)...  -->  ( Tup[0]*10, Tup[1]*10, ... , Tup[$-1]*10 )
and
(Tup1*Tup2)...
and
myArr[Tup + 1]... -> myArr[Tup[0] + 1], myArr[Tup[1] + 1],
myArr[Tup[2] + 1]

would this be valid:
f!T...(x)
or rather this?
f!T(x)...

what does (cast(Tup)x)... evaluate to?

is f(Tup) and f(Tup...) now doing the same thing?

if
f(Items.x...)
works then what about
f(Items.MemberTup...) ?

Because it is basically just staticMap but with syntax from the

Tup->{Tup*4}

where each item in the tuple gets replaces with what is inside
the {} ; Might want to instead have some item variable name and
maybe even index counter but I can't think of some good syntax
here or of any use cases right now.

This might be too powerful though because it's basically AST
macros now if you allow this:

StringTup->{StringTup ~ } ":)"    <- like "a" ~ "b" ~ ":)"

or

Tup->{cast(Tup)}c    <- like cast(int)cast(short)c

But if you implemented this syntax in the compiler you would
manually specify all the allowed rules inside the {} block, so it
would definitely be harder to introduce weird allowed/disallowed
syntax rules.

Apr 22
Manu <turkeyman gmail.com> writes:
On Wed, Apr 22, 2020 at 11:50 PM WebFreak001 via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

On Wednesday, 22 April 2020 at 12:19:25 UTC, rikki cattermole
wrote:
Change it to something like .unpackSequence instead of ... and
I would be happy.

I'm for some other thing than ... too, in C++ it's a quite bad
syntax

Negative, it's the single best thing that happened to C++ in almost 2

and it can become very ugly to maintain with all the
different combinations you could use it with from a compiler
viewpoint.

Actually, the implementation turned out to be 100x more simple and sane
than I expected going in.

Additionally if you do miss some combinations or allow
more complex operators it will quickly become a mess of "why does
this compile but why doesn't this compile" or "why does this do
something different"

If you're asking the question "why does this do something different", then
you exposed an edge case that should be considered a bug.
This has uniform application.

I'm especially not a fan of allowing all of
(Tup*10)...  -->  ( Tup[0]*10, Tup[1]*10, ... , Tup[$-1]*10 )
and
(Tup1*Tup2)...
and
myArr[Tup + 1]... -> myArr[Tup[0] + 1], myArr[Tup[1] + 1], myArr[Tup[2] + 1]

I don't understand your criticism... this is exactly the point of the DIP.

would this be valid:
f!T...(x)
or rather this?
f!T(x)...

Well they're both syntactically possible, but the first one is a nonsense
expression; you're trying to call a TupleExp, which I expect would be a
compile error. The second is a tremendously useful pattern.

what does (cast(Tup)x)... evaluate to?

(cast(Tup[0])x, cast(Tup[1])x, cast(Tup[2])x, ...)

is f(Tup) and f(Tup...) now doing the same thing?

Yes, the identity expansion is a no-op. This is perfectly reasonable and a
very nice property of the DIP.

if
f(Items.x...)
works then what about
f(Items.MemberTup...) ?

We have encountered this, and defining this is the only WIP detail I'm
aware of.
If we follow normal D rules where tuples in tuples flatten, then the
natural result with no special-case intervention is:
f( Items[0].MemberTup[0], Items[0].MemberTup[1], Items[1].MemberTup[0],
Items[1].MemberTup[1], ... )
I'm not sure that's particularly useful, but that's what naturally falls
from D's tuple auto-expansion rules.
We're discussing this case, and if breaking from the natural semantics to
implement a special case is worth the complexity in the spec.
My feeling is that it is not worth a special-case in the spec, and the
(probably not useful) expansion I show above would be the natural language
rule.

Apr 22
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/22/20 9:45 AM, WebFreak001 wrote:
On Wednesday, 22 April 2020 at 12:19:25 UTC, rikki cattermole wrote:
Change it to something like .unpackSequence instead of ... and I would
be happy.

I'm for some other thing than ... too, in C++ it's a quite bad
syntax and it can become very ugly to maintain with all the
different combinations you could use it with from a compiler
viewpoint.
Additionally if you do miss some combinations or allow more complex
operators it will quickly become a mess of "why does this compile but why
doesn't this compile" or "why does this do something different"

I'm especially not a fan of allowing all of
(Tup*10)...  -->  ( Tup[0]*10, Tup[1]*10, ... , Tup[$-1]*10 )
and
(Tup1*Tup2)...
and
myArr[Tup + 1]... -> myArr[Tup[0] + 1], myArr[Tup[1] + 1], myArr[Tup[2]
+ 1]

would this be valid:
f!T...(x)

It would expand to:

(f!(T[0]), f!(T[1]), ... , f!(T[n]))(x)

I don't think this would compile.

or rather this?
f!T(x)...

That would work, you would have:

(f!(T[0])(x), f!(T[1])(x), ..., f!(T[n])(x))

what does (cast(Tup)x)... evaluate to?

(cast(Tup[0])x, cast(Tup[1])x, ..., cast(Tup[n])x);

is f(Tup) and f(Tup...) now doing the same thing?

Yes, using Tup... is the same as Tup. Only inside expressions that use
Tup would the ... make sense.

if
f(Items.x...)
f(Items.MemberTup...) ?

I would expect an expansion as Manu described, where the natural
expansion of Items becomes Items[0].MemberTup, Items[1].MemberTup, ...

However, there are some ambiguities that I'm not sure have been solved.
What about templates that expand to tuples? Are the results of those
tuples part of the ... expansion?

alias F(T...) = AliasSeq!(T, T);

Consider this expansion:

alias t = AliasSeq!(int, char);

F!(F!t)...

Does it mean:

F!(F!int), F!(F!char) => int, int, int, int, char, char, char, char

or does it mean:

F!(AliasSeq!(int, char, int, char))... => int, char, int, char, int,
char, int, char

In other words, what if part of the expression *creates* a tuple. Is
that the thing that gets expanded? I would have to say no, right?
Otherwise, the whole thing might be expanded and the ... trivially applied.

So that means things like (assuming G does something similar but not
identical to F):

F!(G!int)... would have to be the same as F!(G!int). This might be very
confusing to someone expecting the inner tuple to be done before the
expansion is considered.

I don't know how to DIP-ify this idea. But it definitely needs to be
addressed.

-Steve

Apr 22
Manu <turkeyman gmail.com> writes:
On Thu, Apr 23, 2020 at 12:55 AM Steven Schveighoffer via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

On 4/22/20 9:45 AM, WebFreak001 wrote:
if
f(Items.x...)
f(Items.MemberTup...) ?

I would expect an expansion as Manu described, where the natural
expansion of Items becomes Items[0].MemberTup, Items[1].MemberTup, ...

And of course, tuples in tuples auto-flatten according to current D
semantics.
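
The flattening rule referred to here is observable in current D; a minimal example:

```d
import std.meta : AliasSeq;

alias Inner = AliasSeq!(int, char);
alias Outer = AliasSeq!(bool, Inner, Inner);

// Nested sequences flatten: Outer is (bool, int, char, int, char).
static assert(Outer.length == 5);
static assert(is(Outer[1] == int) && is(Outer[4] == char));
```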

However, there are some ambiguities that I'm not sure have been solved.
What about templates that expand to tuples? Are the results of those
tuples part of the ... expansion?

alias F(T...) = AliasSeq!(T, T);

Consider this expansion:

alias t = AliasSeq!(int, char);

F!(F!t)...

The expansion is evaluated from the leaf upwards... your question is valid,
and this is the case we've discussed quite a lot.
I'm basically just interested to run it and see what happens naturally!

Does it mean:
F!(F!int), F!(F!char) => int, int, int, int, char, char, char, char

or does it mean:

F!(AliasSeq!(int, char, int, char))... => int, char, int, char, int,
char, int, char

In other words, what if part of the expression *creates* a tuple. Is
that the thing that gets expanded? I would have to say no, right?
Otherwise, the whole thing might be expanded and the ... trivially applied.

So that means things like (assuming G does something similar but not
identical to F):

F!(G!int)... would have to be the same as F!(G!int). This might be very
confusing to someone expecting the inner tuple to be done before the
expansion is considered.

I don't know how to DIP-ify this idea. But it definitely needs to be
addressed in the DIP.

I have thought about how to discuss this in the DIP; I describe the
semantic, and what happens is what happens.
This will work, and something will happen... when we implement
TemplateInstance, we'll find out exactly what it is :P
What I will do is show what such a nested tuple does when code works in the
DIP to instantiate TemplateInstances.

It's basically the same thing as what you showed above with:
Items[0].MemberTup, Items[1].MemberTup, ...
In general, in D currently, nested tuples flatten. Evaluate from the leaf
upwards.

Apr 22
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/22/20 11:17 AM, Manu wrote:

I have thought about how to discuss this in the DIP; I describe the
semantic, and what happens is what happens.
This will work, and something will happen... when we implement
TemplateInstance, we'll find out exactly what it is :P
What I will do is show what such a nested tuple does when code works in
the DIP to instantiate TemplateInstances.

It's basically the same thing as what you showed above with:
Items[0].MemberTup, Items[1].MemberTup, ...
In general, in D currently, nested tuples flatten. Evaluate from the leaf
upwards.

I think this is more complicated than the MemberTup thing. By inference
of the name, MemberTup is a tuple, but only defined in the context of
the expanded items. There aren't any tuples for the compiler to expand
in there.

A template that returns a tuple based on its parameters is a tuple with
or without expansion.

F!(F!t) is valid. It's going to return int, char, int, char, int, char,
int, char

F!(F!t)... what does this do?

Saying "let's see what happens" is not a good way to create a DIP. We
have enough of "design by implementation" in D.

-Steve

Apr 22
Manu <turkeyman gmail.com> writes:
On Thu, Apr 23, 2020 at 2:10 AM Steven Schveighoffer via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

On 4/22/20 11:17 AM, Manu wrote:
I have thought about how to discuss this in the DIP; I describe the
semantic, and what happens is what happens.
This will work, and something will happen... when we implement
TemplateInstance, we'll find out exactly what it is :P
What I will do is show what such a nested tuple does when code works in
the DIP to instantiate TemplateInstances.

It's basically the same thing as what you showed above with:
Items[0].MemberTup, Items[1].MemberTup, ...
In general, in D currently, nested tuples flatten. Evaluate from the

I think this is more complicated than the MemberTup thing. By inference
of the name, MemberTup is a tuple, but only defined in the context of
the expanded items. There aren't any tuples for the compiler to expand
in there.

A template that returns a tuple based on its parameters is a tuple with
or without expansion.

F!(F!t) is valid. It's going to return int, char, int, char, int, char,
int, char

F!(F!t)... what does this do?

Saying "let's see what happens" is not a good way to create a DIP. We
have enough of "design by implementation" in D.

I only say that because;
1. It was 2am, and I had to think it through.
2. It will just apply the specification I've described in the DIP, absent
of special-cases.

I expect it will do this:

F!(F!t)... =>

expand for t (at leaf of tree):
F!( (F!t[0], F!t[1]) )  ~=   F!( (F!int, F!char) )  =>

Spec does not say it will NOW evaluate the template and operate on the
result, it deals with the expression as stated.
If it didn't behave that way, it would be very easy to lead to recursive
expansion expressions, and impossible to reason about the expansion when it
disappears inside of code that's defined elsewhere.

Expansion applies to the expression as-written.

So, take the tuple from expanding t, and expand the next level:

( F!( (F!int, F!char)[0] ), F!( (F!int, F!char)[1] ) )  ~=    ( F!( F!int
), F!( F!char ) )

I think that's all the tuples in the expression, now semantic will run as
usual, and it will resolve those templates:

Resolve inner F:

( F!(int, int), F!(char, char) )

And outer F:

( int, int, int, int, char, char, char, char )

That's the expansion I would expect from that expression. So, I guess the
point you want to determine is that *evaluating* templates is NOT part of
tuple expansion.
Template resolution will follow normally in semantic. The expansion applies
to the expression as written, and I think that's the only possible
definition, otherwise the expansion starts to recurse inside of
implementation code which is defined elsewhere.

I think that expansion is actually reasonable and 'easy' to understand.
There's nothing unexpected about application of the stated rules.
Of course, I would suggest not writing code like this, unless it's really
clear to the reader what your intent was. There's a million-and-one ways to
write obscure code that does something, but doesn't really help the reader
along the way; this expression is one such thing.
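
For comparison, the expansion sketched above matches what nesting std.meta.staticMap (the recursive pattern the DIP aims to avoid) produces in current D:

```d
import std.meta : AliasSeq, staticMap;

alias F(T...) = AliasSeq!(T, T);
alias t = AliasSeq!(int, char);

// staticMap!(F, t) yields (int, int, char, char); mapping F over it
// again doubles each element once more.
alias R = staticMap!(F, staticMap!(F, t));

static assert(R.length == 8);
static assert(is(R[0] == int) && is(R[3] == int) && is(R[4] == char));
```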

Apr 22
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/22/20 8:13 PM, Manu wrote:
On Thu, Apr 23, 2020 at 2:10 AM Steven Schveighoffer via Digitalmars-d
<digitalmars-d puremagic.com <mailto:digitalmars-d puremagic.com>> wrote:

On 4/22/20 11:17 AM, Manu wrote:
>
> I have thought about how to discuss this in the DIP; I describe the
> semantic, and what happens is what happens.
> This will work, and something will happen... when we implement
> TemplateInstance, we'll find out exactly what it is :P
> What I will do is show what such a nested tuple does when code works in
> the DIP to instantiate TemplateInstances.
>
> It's basically the same thing as what you showed above with:
> Items[0].MemberTup, Items[1].MemberTup, ...
> In general, in D currently, nested tuples flatten. Evaluate from the leaf upwards.

I think this is more complicated than the MemberTup thing. By inference
of the name, MemberTup is a tuple, but only defined in the context of
the expanded items. There aren't any tuples for the compiler to expand
in there.

A template that returns a tuple based on its parameters is a tuple with
or without expansion.

F!(F!t) is valid. It's going to return int, char, int, char, int, char,
int, char

F!(F!t)... what does this do?

I expect it will do this:

F!(F!t)... =>

expand for t (at leaf of tree):
F!( (F!t[0], F!t[1]) )  ~= F!( (F!int, F!char) )  =>

OK, that is what I thought too (that is the most useful), but this needs
to be explicit in the DIP.

Remember that templates can also be tuples too, so when it says " for
any tuples present in the expression tree", it reads ambiguous.

I will bring up again something like this, which someone might expect to
work:

alias G(T) = const(T);

alias F(T) = AliasSeq!(T, T);

alias f = F!(int);
G!(f)...; // seems cool to me
G!(F!int)...; // compiler error or works?
G!(AliasSeq!(int, char))...; // error or works?

Spec does not say it will NOW evaluate the template and operate on the
result, it deals with the expression as stated.

I'm not expecting multiple expansions, but one has to remember that D is
full of templates that create tuples, so the DIP has to say at what
point "these tuples are generated and considered before the expansion"
and "these are not". It seems to me you are saying only tuples that
exist BEFORE the expression are considered. Something like that should
be in the DIP, with appropriate examples.

That's the expansion I would expect from that expression. So, I guess
the point you want to determine is that *evaluating* templates is NOT
part of tuple expansion.

A good way to say it, but still needs examples in the DIP to clarify. To
elaborate, you might say:

"expansion is performed only on tuples represented by symbols in the
expression. All template instantiations that generate tuples are
performed after expansion is finished."

I think that expansion is actually reasonable and 'easy' to understand.
There's nothing unexpected about application of the stated rules.
Of course, I would suggest not writing code like this, unless it's really
clear to the reader what your intent was. There's a million-and-one ways
to write obscure code that does something, but doesn't really help the
reader along the way; this expression is one such thing.

It's easy to understand, but it's also easy to expect the compiler to
understand what you were thinking when you wrote:

foo!(AliasSeq!(int, char))...

instead of the (required) long form:

alias args = AliasSeq!(int, char);
foo!(args)...

proper tuples would make this much less painful...

-Steve

Apr 22
Manu <turkeyman gmail.com> writes:
On Thu, Apr 23, 2020 at 2:50 PM Steven Schveighoffer via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

On 4/22/20 8:13 PM, Manu wrote:
On Thu, Apr 23, 2020 at 2:10 AM Steven Schveighoffer via Digitalmars-d
<digitalmars-d puremagic.com <mailto:digitalmars-d puremagic.com>>

wrote:
On 4/22/20 11:17 AM, Manu wrote:
>
> I have thought about how to discuss this in the DIP; I describe the
> semantic, and what happens is what happens.
> This will work, and something will happen... when we implement
> TemplateInstance, we'll find out exactly what it is :P
> What I will do is show what such a nested tuple does when code works in
> the DIP to instantiate TemplateInstances.
>
> It's basically the same thing as what you showed above with:
> Items[0].MemberTup, Items[1].MemberTup, ...
> In general, in D currently, nested tuples flatten. Evaluate from the leaf upwards.

I think this is more complicated than the MemberTup thing. By inference
of the name, MemberTup is a tuple, but only defined in the context of
the expanded items. There aren't any tuples for the compiler to expand
in there.

A template that returns a tuple based on its parameters is a tuple with
or without expansion.

F!(F!t) is valid. It's going to return int, char, int, char, int, char,
int, char

F!(F!t)... what does this do?

I expect it will do this:

F!(F!t)... =>

expand for t (at leaf of tree):
F!( (F!t[0], F!t[1]) )  ~= F!( (F!int, F!char) )  =>

OK, that is what I thought too (that is the most useful), but this needs
to be explicit in the DIP.

Remember that templates can also be tuples too, so when it says " for
any tuples present in the expression tree", it reads ambiguous.

I will bring up again something like this, which someone might expect to
work:

alias G(T) = const(T);

alias F(T) = AliasSeq!(T, T);

alias f = F!(int);
G!(f)...; // seems cool to me
G!(F!int)...; // compiler error or works?
G!(AliasSeq!(int, char))...; // error or works?

Spec does not say it will NOW evaluate the template and operate on the
result, it deals with the expression as stated.

I'm not expecting multiple expansions, but one has to remember that D is
full of templates that create tuples, so the DIP has to say at what
point "these tuples are generated and considered before the expansion"
and "these are not". It seems to me you are saying only tuples that
exist BEFORE the expression are considered. Something like that should
be in the DIP, with appropriate examples.

Yes, these are very good points. Thanks for spelling this out clearly.

In the event you want the behaviour where the template resolution is
expanded, I reckon people would expect this is the natural solution:
alias Tup = TupleFromTemplate!(Instantiation, Args);
(Tup + expr)...

In this expression, Tup would expand, because the instantiation that
created the tuple is not involved in the expression being expanded.
I have to double-check, but I expect that's precisely how the code works
naturally with no intervention. AliasSeq!() itself depends on this
behaviour to work right now in our unittests.

So, if you find yourself in a situation where you want to expand a tuple
into a template instantiation, and that instantiation resolves to a tuple
which you want to further expand, you just need to break it into 2 lines.
That will also have the nice side-effect of being much clearer to read.
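
A hedged sketch of the two-line pattern being suggested here (proposed `...` syntax; `MakeTuple` and `g` are hypothetical names):

```d
// One line: the instantiation is part of the expanded expression, so the
// tuple it produces is NOT itself expanded:
//     g(MakeTuple!(int, char))...

// Two lines: the tuple exists before the expression, so it expands:
alias Tup = MakeTuple!(int, char);
g(Tup)...;   // => (g(Tup[0]), g(Tup[1]))
```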

That's the expansion I would expect from that expression. So, I guess
the point you want to determine is that *evaluating* templates is NOT
part of tuple expansion.

A good way to say it, but still needs examples in the DIP to clarify. To
elaborate, you might say:

"expansion is performed only on tuples represented by symbols in the
expression. All template instantiations that generate tuples are
performed after expansion is finished."

Yes, this reads well, thanks again. I was struggling to imagine simple
language to describe this behaviour.

I think that expansion is actually reasonable and 'easy' to understand.
There's nothing unexpected about application of the stated rules.
Of course, I would suggest not writing code like this, unless it's really
clear to the reader what your intent was. There's a million-and-one ways
to write obscure code that does something, but doesn't really help the
reader along the way; this expression is one such thing.

It's easy to understand, but it's also easy to expect the compiler to
understand what you were thinking when you wrote:

foo!(AliasSeq!(int, char))...

instead of the (required) long form:

alias args = AliasSeq!(int, char);
foo!(args)...

proper tuples would make this much less painful...

You're exactly correct, the first instantiation needs to be broken out to a
separate line.
Exactly. I hope this feature might motivate renewed interest in first-class
tuples, and that would improve this situation.

Apr 22
Dave Chapman <donte5379 comcast.net> writes:
On Wednesday, 22 April 2020 at 12:19:25 UTC, rikki cattermole
wrote:
Change it to something like .unpackSequence instead of ... and
I would be happy.

Since ... is not alphabetic it is hard to search for. If you do
this DIP with the ... syntax please put some effort into making
it easy to find. For instance if I type "what does ... mean?" and
select "entire site" then the search should return this use of
... on the first page of results. And if this syntax is permanent
then anyone writing a book should have ... in the index.

Apr 22
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/22/20 8:04 AM, Manu wrote:
We have a compile time problem, and this is basically the cure.
Intuitively, people imagine CTFE is expensive (and it kinda is), but
really, the reason our compile times are bad is template instantiation.

This DIP single-handedly fixes compile-time issues in programs I've
written by reducing template instantiations by near-100%, in particular,
the expensive ones; recursive instantiations, usually implementing some
form of static map.

https://github.com/dlang/DIPs/pull/188

This is an RFC on a draft, but I'd like to submit it with a reference
implementation soon.

Stefan Koch has helped me with a reference implementation, which has so
far gone surprisingly smoothly, and has shown 50x improvement in compile
times in some artificial tests.

Yes please! Where is the reference implementation? I want to try some
things out.

I expect much greater improvements in situations where recursive
template expansion reaches a practical threshold due to quadratic
resource consumption used by recursive expansions (junk template
instantiations, and explosive symbol name lengths).
This should also drastically reduce compiler memory consumption in
meta-programming heavy applications.

We have a project that I had to completely refactor because the memory
consumption was so great during compile time, that I ran out of memory
on a 12GB virtual machine on my 16GB macbook. The refactoring made a
difference, but now it's back up and I needed to buy a new system with
32GB of RAM just to build. I understand Weka had similar issues, I
wonder if anyone on that team can elaborate whether this might help fix
those problems.

But I want to see if it actually would fix the problems. It's still a
good change, but I'm not sure it will be the solution to all these.

In addition to that, it's simple, terse, and reduces program logic
indirection via 'utility' template definitions, which I find improves
readability.

There were some posts on "how do I do this in D" that used C++ parameter
pack expansion that just weren't possible, or were horrible to implement
in D.

I wonder if your DIP can solve them? I think it can.

e.g.: https://forum.dlang.org/post/ymlqbjblbjxoitoctevl forum.dlang.org

I think it can be solved just like C++:

int double_int(int val) { return 2 * val; }

T double_int(T)(T val) { return val; }

void double_ints(alias pred, T...)(T args) {
    pred(double_int(args)...); // expands to pred(double_int(args[0]), ..., double_int(args[$-1]))
}

That's awesome.

In my dconf talk last year, I talked about how D's metaprogramming was
its crown jewel, and we should prioritize anything that makes this more
usable/possible. This is the kind of stuff I was talking about. Well done!

-Steve

Apr 22
Manu <turkeyman gmail.com> writes:
On Wed, Apr 22, 2020 at 11:20 PM Steven Schveighoffer via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

On 4/22/20 8:04 AM, Manu wrote:
We have a compile time problem, and this is basically the cure.
Intuitively, people imagine CTFE is expensive (and it kinda is), but
really, the reason our compile times are bad is template instantiation.

This DIP single-handedly fixes compile-time issues in programs I've
written by reducing template instantiations by near-100%, in particular,
the expensive ones; recursive instantiations, usually implementing some
form of static map.

https://github.com/dlang/DIPs/pull/188

This is an RFC on a draft, but I'd like to submit it with a reference
implementation soon.

Stefan Koch has helped me with a reference implementation, which has so
far gone surprisingly smoothly, and has shown 50x improvement in compile
times in some artificial tests.

Yes please! Where is the reference implementation? I want to try some
things out.

The link is in the DIP.
Most tests we've done are working, except template instantiation expansion
is not yet implemented: ie, Thing!(Tuple, x, y)...
That's got a lot of implementation baggage attached to it in DMD.

I expect much greater improvements in situations where recursive
template expansion reaches a practical threshold due to quadratic
resource consumption used by recursive expansions (junk template
instantiations, and explosive symbol name lengths).
This should also drastically reduce compiler memory consumption in
meta-programming heavy applications.

We have a project that I had to completely refactor because the memory
consumption was so great during compile time, that I ran out of memory
on a 12GB virtual machine on my 16GB macbook. The refactoring made a
difference, but now it's back up and I needed to buy a new system with
32GB of RAM just to build. I understand Weka had similar issues, I
wonder if anyone on that team can elaborate whether this might help fix
those problems.

But I want to see if it actually would fix the problems. It's still a
good change, but I'm not sure it will be the solution to all these.

I expect you won't be able to run practical tests on real code yet without
TemplateInstance expansion.
The problem is that existing code is factored exclusively into
template instantiations, so a trivial experiment will apply to existing
code in that form.

In addition to that, it's simple, terse, and reduces program logic
indirection via 'utility' template definitions, which I find improves
readability.

There were some posts on "how do I do this in D" that used C++ parameter
pack expansion that just weren't possible, or were horrible to implement
in D.

Yup. Now they're possible and awesome!

I wonder if your DIP can solve them? I think it can.
e.g.: https://forum.dlang.org/post/ymlqbjblbjxoitoctevl forum.dlang.org

Yes, it's basically written specifically to solve that problem! :)

I think it can be solved just like C++:

int double_int(int val) { return 2 * val; }

T double_int(T)(T val) { return val; }

void double_ints(alias pred, T...)(T args) {
    pred(double_int(args)...);
}

That's awesome.

Yes, something like that.

In my dconf talk last year, I talked about how D's metaprogramming was
its crown jewel, and we should prioritize anything that makes this more
usable/possible. This is the kind of stuff I was talking about. Well done!

Thanks!

Apr 22
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/22/20 9:35 AM, Manu wrote:
On Wed, Apr 22, 2020 at 11:20 PM Steven Schveighoffer via Digitalmars-d
<digitalmars-d puremagic.com <mailto:digitalmars-d puremagic.com>> wrote:

On 4/22/20 8:04 AM, Manu wrote:
> We have a compile time problem, and this is basically the cure.
> Intuitively, people imagine CTFE is expensive (and it kinda is), but
> really, the reason our compile times are bad is
template instantiation.
>
> This DIP single-handedly fixes compile-time issues in programs I've
> written by reducing template instantiations by near-100%, in
particular,
> the expensive ones; recursive instantiations, usually
implementing some
> form of static map.
>
> https://github.com/dlang/DIPs/pull/188
>
> This is an RFC on a draft, but I'd like to submit it with a
reference
> implementation soon.
>
> Stefan Koch has helped me with a reference implementation, which
has so
> far gone surprisingly smoothly, and has shown 50x improvement in
compile
> times in some artificial tests.

Yes please! Where is the reference implementation? I want to try some
things out.

The link is in the DIP.

Oops, it's at the top, I missed that. Thanks.

Most tests we've done are working, except template instantiation
expansion is not yet implemented: ie, Thing!(Tuple, x, y)...
That's got a lot of implementation baggage attached to it in DMD.

Ugh, I think I won't be able to test then, most of this stuff works with
lists of types, not values.

And come to think of it, staticMap works, but I need Filter as well. I
suppose it can help a bit, but I still am going to have tons of
"temporary" templates that are going to clog up the memory.

Stefan, where's that implementation of first class types for CTFE-only
functions you promised? ;)

I expect you won't be able to run practical tests on real code yet
without TemplateInstance expansion.
The problem is that existing code is factored exclusively into
template instantiations, so a trivial experiment will apply to existing
code in that form.

The trivial experiment to test is to take a list of types with possible
duplicates and remove all duplicates. It doesn't have to be in any order.

I'm hoping something like this might work:

template NewFilter(alias pred, T...)
{
    template process(U) {
        static if (pred!U) alias process = U;
        else alias process = AliasSeq!();
    }
    alias NewFilter = AliasSeq!(process!(T)...); // do I need the AliasSeq here?
}

template RemoveDuplicates(T...)
{
    template keepIt(U) {
        enum isSame(V) = __traits(isSame, U, V);
        enum keepIt = NewFilter!(isSame, T).length == 1;
    }
    alias RemoveDuplicates = NewFilter!(keepIt, T);
}

Will this work better? You will still have a bunch of NewFilter,
process, keepIt, and isSame instantiations, all with horribly long
symbol names. But notably, there are no recursive instantiation patterns.

One thing I noticed, in order to use a property on a static map
expansion (i.e. call a function with the resulting sequence, or access
.length), you will need extra parentheses to distinguish the ... token
from the . token.

Is there an easier/better way to do this with the new feature?

-Steve

Apr 22
Manu <turkeyman gmail.com> writes:
On Thu, Apr 23, 2020 at 12:05 AM Steven Schveighoffer via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

On 4/22/20 9:35 AM, Manu wrote:
On Wed, Apr 22, 2020 at 11:20 PM Steven Schveighoffer via Digitalmars-d
<digitalmars-d puremagic.com <mailto:digitalmars-d puremagic.com>>

wrote:
On 4/22/20 8:04 AM, Manu wrote:
> We have a compile time problem, and this is basically the cure.
> Intuitively, people imagine CTFE is expensive (and it kinda is),

but
> really, the reason our compile times are bad is
template instantiation.
>
> This DIP single-handedly fixes compile-time issues in programs

I've
> written by reducing template instantiations by near-100%, in
particular,
> the expensive ones; recursive instantiations, usually
implementing some
> form of static map.
>
> https://github.com/dlang/DIPs/pull/188
>
> This is an RFC on a draft, but I'd like to submit it with a
reference
> implementation soon.
>
> Stefan Koch has helped me with a reference implementation, which
has so
> far gone surprisingly smoothly, and has shown 50x improvement in
compile
> times in some artificial tests.

Yes please! Where is the reference implementation? I want to try some
things out.

The link is in the DIP.

Oops, it's at the top, I missed that. Thanks.

Most tests we've done are working, except template instantiation
expansion is not yet implemented: ie, Thing!(Tuple, x, y)...
That's got a lot of implementation baggage attached to it in DMD.

Ugh, I think I won't be able to test then, most of this stuff works with
lists of types, not values.

Types work now, it just depends what you do with them.
For instance, cast(TypeTup)ValueTup works right now.

And come to think of it, staticMap works, but I need Filter as well. I
suppose it can help a bit, but I still am going to have tons of
"temporary" templates that are going to clog up the memory.

I'd like to see some of your use cases. I do expect you'll need some shim
templates in some cases (ie, filter), but the major difference is that they
won't tend to be recursive expansions. Recursive expansions have quadratic
resource consumption.

Fold/reduce operations can be proposed similarly to this as a follow up. It
might be possible to specify a filter combining a map + fold.

Stefan, where's that implementation of first class types for CTFE-only
functions you promised? ;)

This DIP is a foundation for some of that work.
Type functions are a pretty big spec challenge. This will solve a huge
amount of perf issues, simplify code, and it's a very simple addition
relatively speaking.

Another development that would benefit us hugely would be first-class
tuples which Timon (I think?) has been working on for some time.
That would eliminate the awkward AliasSeq hack, and simplify the
implementation. In a first-class tuple world, we may be able to nest
tuples, whereas today, nested AliasSeq flatten.

I expect you won't be able to run practical tests on real code yet
without TemplateInstance expansion.
The problem is that existing code is factored exclusively into
template instantiations, so a trivial experiment will apply to existing
code in that form.

The trivial experiment to test is to take a list of types with possible
duplicates and remove all duplicates. It doesn't have to be in any order.

Probably not possible with nothing but a map operation. There will be
additional logic, but an efficient map should improve the perf
substantially.

I'm hoping something like this might work:
template NewFilter(alias pred, T...)
{
    template process(U) {
        static if (pred!U) alias process = U;
        else alias process = AliasSeq!();
    }
    alias NewFilter = AliasSeq!(process!(T)...); // do I need the AliasSeq here?
}

This should work. No, you don't need the AliasSeq.

template RemoveDuplicates(T...)
{
    template keepIt(U) {
        enum isSame(V) = __traits(isSame, U, V);
        enum keepIt = NewFilter!(isSame, T).length == 1;
    }
    alias RemoveDuplicates = NewFilter!(keepIt, T);
}

I think efficient implementation here would depend on a static fold, which
I plan for a follow-up, and it's a very trivial expansion from this DIP.
Static reduce would allow ... as an argument to a BinOp.
I.e., Tup + ... would expand to Tup[0] + Tup[1] + Tup[2] + ...
You could do is(FindType == Tup) || ..., and it would evaluate true if
FindType exists in Tup, with no junk template instantiations!
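
A sketch of the follow-up fold idea (hypothetical syntax, not part of this DIP), alongside what current D requires:

```d
// Proposed static fold over a binary operator:
//   Tup + ...                  => Tup[0] + Tup[1] + ... + Tup[$-1]
//   is(FindType == Tup) || ... => is(FindType == Tup[0]) || is(FindType == Tup[1]) || ...

// Today the membership test needs a helper template such as std.meta.anySatisfy,
// which creates the very instantiations the fold would avoid:
import std.meta : AliasSeq, anySatisfy;

alias Tup = AliasSeq!(int, char);
enum isInt(T) = is(T == int);
static assert(anySatisfy!(isInt, Tup));
```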

Will this work better? You will still have a bunch of NewFilter,
process, keepIt, and isSame instantiations, all with horribly long
symbol names. But notably, there are no recursive instantiation patterns.

Using fold as above, you can eliminate the boilerplate.
I don't want to propose that here though. It's a relatively trivial
expansion from this initial DIP.

One thing I noticed, in order to use a property on a static map
expansion (i.e. call a function with the resulting sequence, or access
.length), you will need extra parentheses to distinguish the ... token
from the . token.

call a function with the resulting sequence

Not sure what you mean exactly?

Calling a function that receives the sequence as var args:
f(TupExpr...)

Calling a function for each element in the sequence?
f(TupExpr)...

Is there an easier/better way to do this with the new feature?

Can you show an example? Maybe the precedence is wrong?

Apr 22
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/22/20 10:37 AM, Manu wrote:
One thing I noticed, in order to use a property on a static map
expansion (i.e. call a function with the resulting sequence, or access
.length), you will need extra parentheses to distinguish the ... token
from the . token.

> call a function with the resulting sequence

Not sure what you mean exactly?

Like this works today:

AliasSeq!(1, 2, 3, 4).text;

Similarly, if you wanted to use some tuple expansion:

(transform!T...).text;

you need the parentheses, or else you have transform!T....text.

Or else if you want to get the length, you have:

(transform!T...).length

Is there an easier/better way to do this with the new feature?

Can you show an example? Maybe the precedence is wrong?

I think that question was left over from something else that I deleted
before posting, the .prop just needs clarification.

-Steve

Apr 22
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/22/20 10:37 AM, Manu wrote:
I think efficient implementation here would depend on a static fold,
which I plan for a follow-up, and it's a very trivial expansion from
this DIP.
Static reduce would allow ... as an argument to a BinOp.
I.e., Tup + ... would expand to Tup[0] + Tup[1] + Tup[2] + ...
You could do is(FindType == Tup) || ..., and it would evaluate true if
FindType exists in Tup, with no junk template instantiations!

This is awesome, and I'm not seeing why you would save it for later.

In general, couldn't this DIP be done just strictly on binary operators
and do what this DIP does with commas?

i.e.

foo(T) , ... expands to foo(T[0]), foo(T[1]), ..., foo(T[n])

-Steve

Apr 22
Manu <turkeyman gmail.com> writes:
On Thu, Apr 23, 2020 at 2:50 AM Steven Schveighoffer via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

On 4/22/20 10:37 AM, Manu wrote:
I think efficient implementation here would depend on a static fold,
which I plan for a follow-up, and it's a very trivial expansion from
this DIP.
Static reduce would allow ... as an argument to a BinOp.
I.e., Tup + ... would expand to Tup[0] + Tup[1] + Tup[2] + ...
You could do is(FindType == Tup) || ..., and it would evaluate true if
FindType exists in Tup, with no junk template instantiations!

This is awesome, and I'm not seeing why you would save it for later.

Because it's strictly additive, DIPs with over-reach tend to fail.

In general, couldn't this DIP be done just strictly on binary operators
and do what this DIP does with commas?

i.e.

foo(T) , ... expands to foo(T[0]), foo(T[1]), ..., foo(T[n])

That looks really grammatically challenging to me. I wouldn't know where to
start attempting to implement that :/
The current patch in the compiler is extremely sanitary and self-contained,
but I can't imagine how to do that without making some pretty aggressive
changes to the parser. I think that's likely to have unexpected and
far-reaching side-effects.
It could be expanded to support that syntax in the future, but I'd rather
move with the simple definition I have in the DIP today. I think what you
describe above is very risky :/

Apr 22
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/22/20 8:21 PM, Manu wrote:
On Thu, Apr 23, 2020 at 2:50 AM Steven Schveighoffer via Digitalmars-d
<digitalmars-d puremagic.com <mailto:digitalmars-d puremagic.com>> wrote:

On 4/22/20 10:37 AM, Manu wrote:
> I think efficient implementation here would depend on a static fold,
> which I plan for a follow-up, and it's a very trivial expansion from
> this DIP.
> Static reduce would allow ... as an argument to a BinOp
> Ie; Tup + ... would expand Tup[0] + Tup[1] + Tup[2] + ...
> You could do is(FindType == Tup) || ..., and it would evaluate
true if
> FindType exists in Tup, with no junk template instantiations!

This is awesome, and I'm not seeing why you would save it for later.

Because it's strictly additive, and DIPs with over-reach tend to fail.

Without the folding, I think this feature leaves a lot on the table. I'd
still be happy with the DIP as-is, but it obviously doesn't help me as
much in the projects I'm using.

In general, couldn't this DIP be done just strictly on binary operators
and do what this DIP does with commas?

i.e.

foo(T) , ... expands to foo(T[0]), foo(T[1]), ..., foo(T[n])

That looks really grammatically challenging to me. I wouldn't know where
to start attempting to implement that :/

Is this not the same difficulty as Tup + ... which is what you
suggested above? I thought it was the same thing.

My thought was that if you went for the binary operator implementation,
then you have one addition, instead of one now and one later, and the
binary form can capture everything this DIP captures without needing a
second proposal.

-Steve

Apr 22
Manu <turkeyman gmail.com> writes:
On Thu, Apr 23, 2020 at 2:55 PM Steven Schveighoffer via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

On 4/22/20 8:21 PM, Manu wrote:
On Thu, Apr 23, 2020 at 2:50 AM Steven Schveighoffer via Digitalmars-d
<digitalmars-d puremagic.com <mailto:digitalmars-d puremagic.com>>

wrote:
On 4/22/20 10:37 AM, Manu wrote:
> I think efficient implementation here would depend on a static

fold,
> which I plan for a follow-up, and it's a very trivial expansion

from
> this DIP.
> Static reduce would allow ... as an argument to a BinOp
> Ie; Tup + ... would expand Tup[0] + Tup[1] + Tup[2] + ...
> You could do is(FindType == Tup) || ..., and it would evaluate
true if
> FindType exists in Tup, with no junk template instantiations!

This is awesome, and I'm not seeing why you would save it for later.

Because it's strictly additive, and DIPs with over-reach tend to fail.

Without the folding, I think this feature leaves a lot on the table. I'd
still be happy with the DIP as-is, but it obviously doesn't help me as
much in the projects I'm using.

I think you'll discover that the maps are the more expensive operations.
You'll see a huge improvement from that alone.
If people get behind this DIP, I'm completely confident you'll get fold too.
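The maps Manu refers to are recursive template instantiations like Phobos' staticMap; a simplified sketch of the pattern the DIP eliminates (the final commented form is the DIP's hypothetical syntax):

```d
import std.meta : AliasSeq;

// The classic recursive static map (simplified from std.meta.staticMap):
// each element costs a fresh template instantiation, which is what
// makes heavy metaprogramming slow to compile.
template staticMap(alias F, Ts...)
{
    static if (Ts.length == 0)
        alias staticMap = AliasSeq!();
    else
        alias staticMap = AliasSeq!(F!(Ts[0]), staticMap!(F, Ts[1 .. $]));
}

alias Const(T) = const(T);
static assert(is(staticMap!(Const, int, float)[1] == const(float)));

// Under the DIP, the same map is a single expansion with no helper
// instantiations (hypothetical syntax, not valid D today):
//   alias Tup = AliasSeq!(int, float);
//   alias Mapped = Const!Tup...;
```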

In general, couldn't this DIP be done just strictly on binary
operators
and do what this DIP does with commas?

i.e.

foo(T) , ... expands to foo(T[0]), foo(T[1]), ..., foo(T[n])

That looks really grammatically challenging to me. I wouldn't know where
to start attempting to implement that :/

Is this not the same difficulty as Tup + ... which is what you
suggested above? I thought it was the same thing.

The grammar for binary expressions is easy to interact with.
Comma separators are NOT a binary expression; they are frequently just
hard-wired into the grammar as separators. So everywhere a comma
appears in the grammar where the value may be involved in an expansion
needs a grammatical modification, and everywhere that a comma is not
present in the grammar where an expansion may be accepted must have a
grammatical modification to allow commas in that location too.

I haven't tried it, but the thought of hacking such a change into the
grammar everywhere a comma appears terrifies me; resulting ambiguity
feels very likely.
Hooking a special token on the right of a binary expression should be a
1-line change however.

My thought was that if you went for the binary operator implementation,
then you have one addition, instead of one now and one later, and the
binary form can capture everything this DIP captures without needing a
second proposal.

static fold has a degenerate edge case (empty tuple), and handling that
case has options subject to value judgement. I don't want this DIP to get
lost in that debate. There are a few reasonable options in that case, and I
could make arguments for and against each option that I can think of.
static fold is probably useful about 10% as often as static map. This DIP
as is will make the biggest impact on compile perf and brevity and we
should take it as eagerly as we are able.

We'll do fold as an immediate follow up though. If it gets in good shape
and it looks no more likely to increase controversy before the DIP's get
far through the pipeline, then maybe we can merge the DIPs.

Apr 22
Q. Schroll <qs.il.paperinik gmail.com> writes:
On Wednesday, 22 April 2020 at 16:49:58 UTC, Steven Schveighoffer
wrote:
On 4/22/20 10:37 AM, Manu wrote:
I think efficient implementation here would depend on a static
fold, which I plan for a follow-up, and it's a very trivial
expansion from this DIP.
Static reduce would allow ... as an argument to a BinOp
Ie; Tup + ... would expand Tup[0] + Tup[1] + Tup[2] + ...
You could do is(FindType == Tup) || ..., and it would
evaluate true if FindType exists in Tup, with no junk template
instantiations!

This is awesome, and I'm not seeing why you would save it for
later.

In general, couldn't this DIP be done just strictly on binary
operators and do what this DIP does with commas?

i.e.

foo(T) , ... expands to foo(T[0]), foo(T[1]), ..., foo(T[n])

-Steve

That's a good addition. Note that C++ requires parentheses around
the fold expressions. I think that's a good thing; makes stuff
more readable. But I'd think when the fold expression is
completely inside a construct that has parentheses, the fold
expression parentheses should be optional. I.e. you'd need

bool condition = (f(tuple) && ...); // unfolds to f(tuple[0]) && ... && f(tuple[$-1])

but you can do

if (f(tuple) && ...) { /*whatever*/ }

and don't need

if ((f(tuple) && ...)) { /*whatever*/ }

which is just unnecessary and confusing. Also note that C++ allows
initial and terminal values: (1 + ... + f(tuple)) unfolds to
(1 + f(tuple[0]) + ... + f(tuple[$-1])).
Some operators don't have default values and need something to
handle empty tuples. (To be clear: Exactly one expression must
contain tuples.)
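For comparison, C++17's rules for the empty-pack case, transcribed as hypothetical D (none of this is valid D today; the seeded form is one option a D fold could adopt to resolve the empty-tuple question):

```d
// Hypothetical D fold syntax modeled on C++17 fold expressions.
// With an empty tuple Ts:
//   (Ts && ...)      // C++ defines this as true  (identity of &&)
//   (Ts || ...)      // C++ defines this as false (identity of ||)
//   (Ts + ...)       // C++ rejects this: + has no defined identity
//   (0 + ... + Ts)   // a seed makes the empty case well-defined: yields 0
```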

May 08
On Wednesday, 22 April 2020 at 14:02:45 UTC, Steven Schveighoffer
wrote:
Stefan, where's that implementation of first class types for
CTFE-only functions you promised? ;)

As you can see, I have been busy with something similar:
CTFE-only type-access, or type functions as I call them, is coming
closer through the work that Manu and I are doing.

The type-functions I have in mind will essentially supersede  ...
expressions.
However, ... expressions can be there in the next few weeks, whereas
type functions need much heavier machinery, and therefore require
more invasive changes to dmd which are unlikely to be finished soon.

Cheers,
Stefan

Apr 22
"H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Apr 22, 2020 at 05:07:58PM +0000, Stefan Koch via Digitalmars-d wrote:
On Wednesday, 22 April 2020 at 14:02:45 UTC, Steven Schveighoffer wrote:

Stefan, where's that implementation of first class types for
CTFE-only functions you promised? ;)

As you can see I have been busy with something similar.
ctfe-only type-access or type-functions as I call them, coming closer
through the work that Manu and I are doing.

The type-functions I have in mind will essentially supersede  ...
expressions.
However ... expressions can be there in the next weeks whereas type
functions need much heavier machinery; and therefore will take some
more invasive changes to dmd which are unlikely to be finished soon.

[...]

This is awesome stuff.  While I'm looking forward to 1st-class types for
CTFE, I think this DIP is important for the interim, because it will
improve D's template metaprogramming, which, as some have said, is D's
crown jewel.

Without D's awesome template metaprogramming capabilities, it's a pretty
safe bet to say I wouldn't be using D today.  But the current template
expansion paradigm, which inherits from early versions of C++, simply
isn't scalable enough for non-trivial applications. This DIP, if Manu's
improvement measurements are a reliable indicator of the general case,
would make template metaprogramming *much* more attractive in D (even
more than it already is right now), and will rightly capitalize on D's
strengths to make it an even better product.  And most importantly, it
can be implemented in the near future, which is saying a lot seeing how
a lot of things in D go. :-D

And if newCTFE lands within the next year or so, the combination could
well catapult D metaprogramming to whole new levels.

*Then* when 1st class types land, it would be the next revolution. :-D

T

--
Why did the mathematician reinvent the square wheel?
Because he wanted to drive smoothly over an inverted catenary road.

Apr 22
Manu <turkeyman gmail.com> writes:
On Wed, Apr 22, 2020 at 11:35 PM Manu <turkeyman gmail.com> wrote:

On Wed, Apr 22, 2020 at 11:20 PM Steven Schveighoffer via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

On 4/22/20 8:04 AM, Manu wrote:
We have a compile time problem, and this is basically the cure.
Intuitively, people imagine CTFE is expensive (and it kinda is), but
really, the reason our compile times are bad is template instantiation.

This DIP single-handedly fixes compile-time issues in programs I've
written by reducing template instantiations by near-100%, in

particular,
the expensive ones; recursive instantiations, usually implementing some
form of static map.

https://github.com/dlang/DIPs/pull/188

This is an RFC on a draft, but I'd like to submit it with a reference
implementation soon.

Stefan Koch has helped me with a reference implementation, which has so
far gone surprisingly smoothly, and has shown 50x improvement in

compile
times in some artificial tests.

Yes please! Where is the reference implementation? I want to try some
things out.

The link is in the DIP.
Most tests we've done are working, except template instantiation expansion
is not yet implemented: ie, Thing!(Tuple, x, y)...
That's got a lot of implementation baggage attached to it in DMD.

Stefan worked out how to make the TemplateInstance case work. The branch is
updated, and it's working now!!
If you wanna do some tests, it'd be really interesting to see how much it
helps your code, and also help find edge cases and bugs.


Apr 23
Walter Bright <newshound2 digitalmars.com> writes:
On 4/22/2020 6:18 AM, Steven Schveighoffer wrote:
int double_int(int val) { return 2 * val; }

T double_int(T val) { return val; }

void double_ints(alias pred, T... args) {
pred(double_int(args)...);
}

This can be done with the array syntax proposal using UFCS:

void double_ints(alias pred, T... args) {
args.pred();
}

Apr 23
Simen =?UTF-8?B?S2rDpnLDpXM=?= <simen.kjaras gmail.com> writes:
On Friday, 24 April 2020 at 04:22:47 UTC, Walter Bright wrote:
On 4/22/2020 6:18 AM, Steven Schveighoffer wrote:
int double_int(int val) { return 2 * val; }

T double_int(T val) { return val; }

void double_ints(alias pred, T... args) {
pred(double_int(args)...);
}

This can be done with the array syntax proposal using UFCS:

void double_ints(alias pred, T... args) {
args.pred();
}

And how does this work for staticMap, which is essentially the
same as double_ints above, but pred is a template?

template staticMap(alias Fn, T...) {
alias staticMap = args.// what do I put here to invoke array
syntax?
}

--
Simen

Apr 23
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/24/20 12:22 AM, Walter Bright wrote:
On 4/22/2020 6:18 AM, Steven Schveighoffer wrote:
int double_int(int val) { return 2 * val; }

T double_int(T val) { return val; }

void double_ints(alias pred, T... args) {
pred(double_int(args)...);
}

This can be done with the array syntax proposal using UFCS:

void double_ints(alias pred, T... args) {
args.pred();
}

What's the array syntax proposal? The above seems not to invoke
double_int at all.

-Steve

Apr 24
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/24/20 8:08 AM, Steven Schveighoffer wrote:
On 4/24/20 12:22 AM, Walter Bright wrote:
On 4/22/2020 6:18 AM, Steven Schveighoffer wrote:
int double_int(int val) { return 2 * val; }

T double_int(T val) { return val; }

void double_ints(alias pred, T... args) {
pred(double_int(args)...);
}

This can be done with the array syntax proposal using UFCS:

void double_ints(alias pred, T... args) {
args.pred();
}

What's the array syntax proposal? The above seems not to invoke
double_int at all.

so I think you mean this:

double_int(args).pred;

Which.... I don't know if this is what we want to support as a tuple
expansion automatically (vs. compiler error)

-Steve

Apr 24
Guillaume Piolat <first.last gmail.com> writes:
On Wednesday, 22 April 2020 at 12:04:30 UTC, Manu wrote:
We have a compile time problem, and this is basically the cure.
Intuitively, people imagine CTFE is expensive (and it kinda
is), but really, the reason our compile times are bad is
template instantiation.

I'm actively avoiding ranges and concept-like things in order to
avoid template instantiation. It doesn't make sense in the
context of D.

With new CTFE and this the sky is the limit :)

Apr 22
Sebastiaan Koppe <mail skoppe.eu> writes:
On Wednesday, 22 April 2020 at 12:04:30 UTC, Manu wrote:
[...]
We should have done this a long time ago.

- Manu

Love it. Syntax and all.

Apr 22
Panke <tobias pankrath.net> writes:
On Wednesday, 22 April 2020 at 12:04:30 UTC, Manu wrote:
We should have done this a long time ago.

- Manu

Thanks! Great work.

Apr 22
Arine <arine1283798123 gmail.com> writes:
On Wednesday, 22 April 2020 at 12:04:30 UTC, Manu wrote:
We should have done this a long time ago.

- Manu

This is going to be the first DIP in a long time I can actually
get behind. With practicality as its foundation rather than
ideology.

Apr 22
WebFreak001 <d.forum webfreak.org> writes:
On Wednesday, 22 April 2020 at 12:04:30 UTC, Manu wrote:
[...]

https://github.com/dlang/DIPs/pull/188

[...]

Looking at it again, I'm actually starting to quite like it. When
I first looked at it I thought it would become a C++-style mess,
like allowing (Tuple + )... to basically work like AST macros and
sum together a tuple. But actually thinking about ... as a unary
operator like described in the DIP makes it quite clear.

But how exactly would it handle parentheses? For the most simple
implementation I would expect (Tup + 10)[2]... to error because
it would first try to evaluate (Tup + 10) like in current D, and
thus error, and after that try to expand the Result[2]... - For
correct syntax I would think of (Tup + 10)...[2] here

If both (Tup + 10)[2]... and (Tup + 10)...[2] however are going
to perform the same thing, I can see multiple potential pitfalls
because the compiler tries to be too smart.

It also says it goes through the expr tree but recursing through
the entire expression tree might introduce unexpected issues and
would be a non-intuitive experience to the programmer. Take a
call like X!(T!Tup)... for example: if normally T!Tup expects
a Tuple as argument but if the ... would go through the entire
expression tree, it would change it to: X!(T!(Tup[0]),
T!(Tup[1]), ..., T!(Tup[$-1])) - performing unwanted expansion of
T!Tup even though I wanted X!(T!(Tup)[0]), X!(T!(Tup)[1]), ...,
X!(T!(Tup)[$-1])

I think it would very much make sense that it just expands to
f(Items[0].MemberTup, Items[1].MemberTup, ...) because when doing
an access to a non-tuple member it's also not actually existing
on the whole tuple list but only on individual items.

I could actually much rather see a use-case of accessing all
members of MemberTup inside the Items tuple, which could probably
be realized using __traits(getMember, Items, MemberTup)...

Apr 23
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/23/20 8:36 AM, WebFreak001 wrote:
On Wednesday, 22 April 2020 at 12:04:30 UTC, Manu wrote:
[...]

https://github.com/dlang/DIPs/pull/188

[...]

looking at it again, I actually quite start to like it because when I
first looked at it I thought it would start to be a C++ mess like
allowing (Tuple + )... to basically work like AST macros and sum
together a tuple. But actually thinking about ... as a unary operator
like described in the DIP makes it quite clear.

But how exactly would it handle parentheses? For the most simple
implementation I would expect (Tup + 10)[2]... to error because it would
first try to evaluate (Tup + 10) like in current D, and thus error, and
after that try to expand the Result[2]... - For correct syntax I would
think of (Tup + 10)...[2] here

If you are looking for the second item out of the tuple, your last
expression is correct. However, it's kind of silly not to just do:

Tup[2] + 10

The first instance is not invalid, it's going to be:

(Tup[0] + 10)[2], (Tup[1] + 10)[2], ...

which could be valid for some types (they could support opBinary!"+" and
opIndex)

If both (Tup + 10)[2]... and (Tup + 10)...[2] however are going to
perform the same thing, I can see multiple potential pitfalls because
the compiler tries to be too smart.

I think they will be different because of where the ... operator is applied.

It also says it goes through the expr tree but recursing through the
entire expression tree might introduce unexpected issues and would be a
non-intuitive experience to the programmer. Take a call like
X!(T!Tup)... for example: if normally T!Tup expects a Tuple as
argument but if the ... would go through the entire expression tree, it
would change it to: X!(T!(Tup[0]), T!(Tup[1]), ..., T!(Tup[$-1])) -
performing unwanted expansion of T!Tup even though I wanted
X!(T!(Tup)[0]), X!(T!(Tup)[1]), ..., X!(T!(Tup)[$-1])

I had a discussion with Manu on this elsewhere in the thread -- I
believe he is going to update the DIP to clarify this situation. In
essence, you have to first evaluate the T!Tup expression into its tuple,
and then use the result inside the expanding expression for it to work
properly.

it would very much make sense that it just expands to
f(Items[0].MemberTup, Items[1].MemberTup, ...) because when doing an
access to a non-tuple member it's also not actually existing on the
whole tuple list but only on individual items.

Yep.

I could actually much rather see a use-case of accessing all members of
MemberTup inside the Items tuple, which could probably be realized using
__traits(getMember, Items, MemberTup)...

That would be cool. Of course, MemberTup has to be a list of names,
but it's a very nice feature: converting a list of names into a list
of members without messy templates!
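A hedged sketch of that idea: today the name-to-member conversion needs a helper template plus staticMap, while under the proposal it could be a single __traits expansion (hypothetical DIP syntax in the trailing comment; `member` and `names` are illustrative names):

```d
import std.meta : AliasSeq, staticMap;

struct Foo { int a; string b; }

// Today: convert a list of member names into a list of members
// with a bound helper template plus staticMap.
alias member(string name) = __traits(getMember, Foo, name);
alias names = AliasSeq!("a", "b");
alias members = staticMap!(member, names);
static assert(is(typeof(members[0]) == int));

// Under the DIP (hypothetical): expand the name tuple in place,
// with no helper instantiations:
//   __traits(getMember, Foo, names)...
```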

Manu -- another clarification in the DIP is needed here -- __traits is
not a template, but I think it should be treated similarly. That is,
__traits(allMembers, Foo)... should be equivalent to
__traits(allMembers, Foo)

-Steve

Apr 23
On Thursday, 23 April 2020 at 13:38:10 UTC, Steven Schveighoffer
wrote:
Manu -- another clarification in the DIP is needed here --
__traits is not a template, but I think it should be treated
similarly. That is, __traits(allMembers, Foo)... should be
equivalent to __traits(allMembers, Foo)

-Steve

It already is. Tuples without operations are just expanded
verbatim, the same as regular expansion.
I've started collaborating on the DIP; since Manu and I are in
opposite parts of the earth, that means 24/7 support!

Apr 23
WebFreak001 <d.forum webfreak.org> writes:
On Thursday, 23 April 2020 at 13:38:10 UTC, Steven Schveighoffer
wrote:
On 4/23/20 8:36 AM, WebFreak001 wrote:
On Wednesday, 22 April 2020 at 12:04:30 UTC, Manu wrote:
[...]

https://github.com/dlang/DIPs/pull/188

[...]

looking at it again, I actually quite start to like it because
when I first looked at it I thought it would start to be a C++
mess like allowing (Tuple + )... to basically work like AST
macros and sum together a tuple. But actually thinking about
... as a unary operator like described in the DIP makes it
quite clear.

But how exactly would it handle parentheses? For the most
simple implementation I would expect (Tup + 10)[2]... to error
because it would first try to evaluate (Tup + 10) like in
current D, and thus error, and after that try to expand the
Result[2]... - For correct syntax I would think of (Tup +
10)...[2] here

If you are looking for the second item out of the tuple, your
last expression is correct. However, it's kind of silly not to just
do:

Tup[2] + 10

Sure, it's silly, but it might be something other than [2], like
.foobar()

The first instance is not invalid, it's going to be:

(Tup[0] + 10)[2], (Tup[1] + 10)[2], ...

I don't quite see how this is? Isn't the ... working on the array
index like in the examples? If you had a (Tup + 10)[OtherTup +
8]... wouldn't it then extend to (Tup + 10)[OtherTup[0] + 8],
(Tup + 10)[OtherTup[1] + 8], ..., (Tup + 10)[OtherTup[$- 1] + 8] ? If the ... works depending on if the identifier in there is a tuple or not it would become extremely random and no longer be representable using a context free grammar  Apr 23 Simen =?UTF-8?B?S2rDpnLDpXM=?= <simen.kjaras gmail.com> writes: On Wednesday, 22 April 2020 at 12:04:30 UTC, Manu wrote: This DIP single-handedly fixes compile-time issues in programs I've written by reducing template instantiations by near-100%, in particular, the expensive ones; recursive instantiations, usually implementing some form of static map. We should have done this a long time ago. This is beautiful and awesome (syntax and all). I was wondering if there's any way to to do a cross product with this, like fun(Xs, Ys)... expand to fun(Xs[0], Ys[0]), fun(Xs[0], Ys[1]), fun(Xs[1], Ys[0]), fun(Xs[1], Ys[1]), but that might very well be rare enough to not warrant special consideration. The other thing that worries me a little is the difference between Foo!(AliasSeq!(1,2))... and Foo!(Numbers)... - would Foo!(AliasSeq!(1,2)...)... do the same as Foo!(Numbers)...? -- Simen  Apr 23 Stefan Koch <uplink.coder gmail.com> writes: On Thursday, 23 April 2020 at 12:43:59 UTC, Simen Kjærås wrote: On Wednesday, 22 April 2020 at 12:04:30 UTC, Manu wrote: This DIP single-handedly fixes compile-time issues in programs I've written by reducing template instantiations by near-100%, in particular, the expensive ones; recursive instantiations, usually implementing some form of static map. We should have done this a long time ago. This is beautiful and awesome (syntax and all). I was wondering if there's any way to to do a cross product with this, like fun(Xs, Ys)... expand to fun(Xs[0], Ys[0]), fun(Xs[0], Ys[1]), fun(Xs[1], Ys[0]), fun(Xs[1], Ys[1]), but that might very well be rare enough to not warrant special consideration. doing the cartesian product of the tuples was in my first implementation. 
It's not too useful in the general case though.  Apr 23 Steven Schveighoffer <schveiguy gmail.com> writes: On 4/23/20 8:43 AM, Simen Kjærås wrote: On Wednesday, 22 April 2020 at 12:04:30 UTC, Manu wrote: This DIP single-handedly fixes compile-time issues in programs I've written by reducing template instantiations by near-100%, in particular, the expensive ones; recursive instantiations, usually implementing some form of static map. We should have done this a long time ago. This is beautiful and awesome (syntax and all). I was wondering if there's any way to to do a cross product with this, like fun(Xs, Ys)... expand to fun(Xs[0], Ys[0]), fun(Xs[0], Ys[1]), fun(Xs[1], Ys[0]), fun(Xs[1], Ys[1]), but that might very well be rare enough to not warrant special consideration. You need to do part of it with templates I think. But it should be more pleasant. I was writing something, but I haven't finished it. Will see if I can come up with it. These kinds of puzzles are fun ;) The other thing that worries me a little is the difference between Foo!(AliasSeq!(1,2))... and Foo!(Numbers)... - would Foo!(AliasSeq!(1,2)...)... do the same as Foo!(Numbers)...? No, Foo!(AliasSeq!(1, 2)...)... is equivalent to Foo!(AliasSeq!(1, 2)) Whereas if Numbers is a tuple, Foo!(Numbers)... is equivalent to Foo!(Numbers[0]), Foo!(Numbers[1]), ... -Steve  Apr 23 Stefan Koch <uplink.coder googlemail.com> writes: On Thursday, 23 April 2020 at 13:48:54 UTC, Steven Schveighoffer wrote: These kinds of puzzles are fun ;) yeah no. It's not fun if you have to make sure the rewrite is what people expect. Foo!(AliasSeq(1,2)...)... will expand to Compiler_Tuple(Foo!(1), Foo!(2))  Apr 23 Stefan Koch <uplink.coder googlemail.com> writes: On Thursday, 23 April 2020 at 13:56:47 UTC, Stefan Koch wrote: On Thursday, 23 April 2020 at 13:48:54 UTC, Steven Schveighoffer wrote: These kinds of puzzles are fun ;) yeah no. It's not fun if you have to make sure the rewrite is what people expect. 
Foo!(AliasSeq(1,2)...)... will expand to Compiler_Tuple(Foo!(1), Foo!(2)) to make the point clearer: ---- template AliasSeq(seq...) { alias AliasSeq = seq; } template Foo(alias X) { struct S { typeof(X) v = X; } enum Foo = S(); } ---- pragma(msg, Foo!(AliasSeq!(1, 2))...); outputs: ---- tuple(S(1), S(2)) ----  Apr 23 Manu <turkeyman gmail.com> writes: On Fri, Apr 24, 2020 at 12:00 AM Stefan Koch via Digitalmars-d < digitalmars-d puremagic.com> wrote: On Thursday, 23 April 2020 at 13:48:54 UTC, Steven Schveighoffer wrote: These kinds of puzzles are fun ;) yeah no. It's not fun if you have to make sure the rewrite is what people expect. Foo!(AliasSeq(1,2)...)... will expand to Compiler_Tuple(Foo!(1), Foo!(2)) I don't think that's true according to the current implementation. It will try and expand the expression AliasSeq!(1, 2). there are no tuples in that expression, and no expansion will take place.  Apr 23 Manu <turkeyman gmail.com> writes: On Fri, Apr 24, 2020 at 12:05 AM Manu <turkeyman gmail.com> wrote: On Fri, Apr 24, 2020 at 12:00 AM Stefan Koch via Digitalmars-d < digitalmars-d puremagic.com> wrote: On Thursday, 23 April 2020 at 13:48:54 UTC, Steven Schveighoffer wrote: These kinds of puzzles are fun ;) yeah no. It's not fun if you have to make sure the rewrite is what people expect. Foo!(AliasSeq(1,2)...)... will expand to Compiler_Tuple(Foo!(1), Foo!(2)) I don't think that's true according to the current implementation. It will try and expand the expression AliasSeq!(1, 2). there are no tuples in that expression, and no expansion will take place. Oh no, I'm wrong. I think we have a bug there... I don't think that should work that way, and I can see why it does; it's because our code sees the template argument as a tuple identifier to expand, and not as the expression being supplied. I think semantic is being called eagerly somewhere that it shouldn't...  
Apr 23 Stefan Koch <uplink.coder googlemail.com> writes: On Thursday, 23 April 2020 at 13:48:54 UTC, Steven Schveighoffer wrote: No, Foo!(AliasSeq!(1, 2)...)... is equivalent to Foo!(AliasSeq!(1, 2)) In the current implementation it's a parser error.  Apr 23 WebFreak001 <d.forum webfreak.org> writes: On Thursday, 23 April 2020 at 14:06:03 UTC, Stefan Koch wrote: On Thursday, 23 April 2020 at 13:48:54 UTC, Steven Schveighoffer wrote: No, Foo!(AliasSeq!(1, 2)...)... is equivalent to Foo!(AliasSeq!(1, 2)) In the current implementation it's a parser error. I would assume that both alias Numbers = AliasSeq!(1, 2); Foo!(Numbers...) and Foo!(AliasSeq!(1, 2)...) should do the same, no? I think of something like (expr + Tuple)... as a simple map!(Item => expr + Item) - however the compiler automatically finding the Tuple and the scope of the expression really bugs me the most here because it feels so subjective. Wouldn't some syntax like Tuple->{Tuple + 4} be clearer and easier to parse for both DMD and static analysis tools? To keep it as simple as possible it would just reuse the name of the tuple as item name, but then it's basically a fancy, clear map syntax, where you exactly control the tuple that's being expanded and avoid any issues with nested tuples in the expression tree.  Apr 23 Manu <turkeyman gmail.com> writes: On Fri, Apr 24, 2020 at 12:20 AM WebFreak001 via Digitalmars-d < digitalmars-d puremagic.com> wrote: On Thursday, 23 April 2020 at 14:06:03 UTC, Stefan Koch wrote: On Thursday, 23 April 2020 at 13:48:54 UTC, Steven Schveighoffer wrote: No, Foo!(AliasSeq!(1, 2)...)... is equivalent to Foo!(AliasSeq!(1, 2)) In the current implementation it's a parser error. I would assume that both alias Numbers = AliasSeq!(1, 2); Foo!(Numbers...) and Foo!(AliasSeq!(1, 2)...) should do the same, no? Well, they both do nothing. You probably mean Foo!(Numbers)... Foo!(AliasSeq!(1, 2))... But even then, no; there's no tuple to expand in that second statement. 
What there is, is a template instantiation, and it _resolves_ to a tuple, but the current spec doesn't tunnel into post-evaluation tuples like that. If it did, I think it would be very unwieldy. Expansion occurs before expression evaluation... and it must, because the expression to evaluate is the result of the expansion. I think of something like (expr + Tuple)... as a simple map!(Item => expr + Item) - however the compiler automatically finding the Tuple and the scope of the expression really bugs me the most here because it feels so subjective. Wouldn't some syntax like Tuple->{Tuple + 4} be clearer and easier to parse for both DMD and static analysis tools? To keep it as simple as possible it would just reuse the name of the tuple as item name, but then it's basically a fancy, clear map syntax, where you exactly control the tuple that's being expanded and avoid any issues with nested tuples in the expression tree. What you're describing is something that starts to look like type functions, or a type lambda specifically. I'm all for type functions, but that's not what this DIP is. I'm in full support of a type functions DIP though! clearer and easier to parse for both DMD This has been surprisingly easy to implement. It's trivial to parse, and semantic has been totally self-contained. The complexity we've discovered is pretty much 100% concerning the moment that expansion is applied vs semantic being run. Expansion must apply before the expression evaluation, because the expansion affects the expression to be evaluated.  Apr 23 Manu <turkeyman gmail.com> writes: On Thu, Apr 23, 2020 at 10:45 PM Simen Kj=C3=A6r=C3=A5s via Digitalmars-d < digitalmars-d puremagic.com> wrote: On Wednesday, 22 April 2020 at 12:04:30 UTC, Manu wrote: This DIP single-handedly fixes compile-time issues in programs I've written by reducing template instantiations by near-100%, in particular, the expensive ones; recursive instantiations, usually implementing some form of static map. 
We should have done this a long time ago. This is beautiful and awesome (syntax and all). I was wondering if there's any way to to do a cross product with this, like fun(Xs, Ys)... expand to fun(Xs[0], Ys[0]), fun(Xs[0], Ys[1]), fun(Xs[1], Ys[0]), fun(Xs[1], Ys[1]), but that might very well be rare enough to not warrant special consideration. You can do this by expanding tuples with the appropriate indices: fun(Xs[CrossIndexX], Ys[CrossIndexY])... Where CrossIndexX is (0, 0, 1, 1) and CrossIndexY is (0, 1, 0, 1). The other thing that worries me a little is the difference between Foo!(AliasSeq!(1,2))... and Foo!(Numbers)... - would Foo!(AliasSeq!(1,2)...)... do the same as Foo!(Numbers)...? This needs to be clarified in the DIP; the AliasSeq!() instantiation there is NOT actually a tuple (yet). It's just a template instantiation expression. So, Foo!(AliasSeq!(1,2)...)... does nothing; there's no tuple in the expression to expand. If you want to expand that AliasSeq, you need the second expression you wrote: alias Numbers =3D AliasSeq!(1,2); Foo!(Numbers)... So the answer is no, they are not the same.  Apr 23 Steven Schveighoffer <schveiguy gmail.com> writes: On 4/23/20 9:53 AM, Manu wrote: You can do this by expanding tuples with the appropriate indices: fun(Xs[CrossIndexX], Ys[CrossIndexY])... Where CrossIndexX is (0, 0, 1, 1) and CrossIndexY is (0, 1, 0, 1). I don't think this works, aren't Xs and Ys tuples also? I think what you need is to expand the Xs and Ys into further tuples: alias XParams = Xs[CrossIndexX]...; alias YParams = Ys[CrossIndexY]...; fun(XParams, YParams)...; Which would work I think. this means you need 4 pre-expression declarations! Two for the X and Y expansions, and two for the indexes. 
-Steve

Apr 23
Manu <turkeyman gmail.com> writes:
On Fri, Apr 24, 2020 at 2:30 AM Steven Schveighoffer via Digitalmars-d <digitalmars-d puremagic.com> wrote:
On 4/23/20 9:53 AM, Manu wrote:
You can do this by expanding tuples with the appropriate indices:

fun(Xs[CrossIndexX], Ys[CrossIndexY])...

Where CrossIndexX is (0, 0, 1, 1) and CrossIndexY is (0, 1, 0, 1).

I don't think this works, aren't Xs and Ys tuples also? I think what you need is to expand the Xs and Ys into further tuples:

alias XParams = Xs[CrossIndexX]...;
alias YParams = Ys[CrossIndexY]...;
fun(XParams, YParams)...;

Which would work I think. This means you need 4 pre-expression declarations! Two for the X and Y expansions, and two for the indexes.

-Steve

Oh yeah, it was too late at night. Your solution looks right.

On a slightly unrelated note; there's a really interesting idea up-thread about making nested ... not be expanded by the outer ...
For instance:

template T(A, B...) { ... }
T!(Tup1, Tup2...)...

What we're saying here is, Tup2 gets the identity expansion, but that result is NOT expanded with the outer; so the expansion above is:

T!(Tup1, Tup2...)...  ->  ( T!(Tup1[0], Tup2...), T!(Tup1[1], Tup2...), ..., T!(Tup1[$-1], Tup2...) )

This is opposed to currently where we recurse through nested expansions,
effectively:

T!(Tup1, Tup2)...  ->  ( T!(Tup1[0], Tup2[0]), T!(Tup1[1], Tup2[1]),
...,  T!(Tup1[$-1], Tup2[$-1]) )

So, although Tup2's expansion is the identity expansion, this syntax allows
Tup2 to be passed to T!()'s variadic args as the tuple it is, rather than
expanded like Tup1.
Using ... in nested context this way gives articulate control over cases
where multiple Tuples in the tree should or shouldn't be expanded by ...

I think this is also the natural rule; it follows from the claim that
expansion is performed pre-semantic evaluation; nested ... is only a
tuple AFTER evaluation, so it seems natural that it should not be expanded
by the outer expansion.

Apr 23
Simen Kjærås <simen.kjaras gmail.com> writes:
On Friday, 24 April 2020 at 00:16:52 UTC, Manu wrote:
On a slightly unrelated note; there's a really interesting idea
about making nested ... not be expanded by the outer ...
For instance:

template T(A, B...) { ... }
T!(Tup1, Tup2...)...

I'm wondering if perhaps that's the wrong way around - AliasSeq already does that:

T!(Tup1, AliasSeq!Tup2)... // Should only expand Tup1

It might be that the more useful behavior is for ...-tuples to be
expandable:

T!(AliasSeq!(int, char)..., AliasSeq!Tup2)... // Should
expand to T!(int, Tup2), T!(char, Tup2)

--
Simen

Apr 23
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/24/20 1:16 AM, Simen Kjærås wrote:
On Friday, 24 April 2020 at 00:16:52 UTC, Manu wrote:
On a slightly unrelated note; there's a really interesting idea up-thread
about making nested ... not be expanded by the outer ...
For instance:

template T(A, B...) { ... }
T!(Tup1, Tup2...)...

I'm wondering if perhaps that's the wrong way around - AliasSeq already
does that:

T!(Tup1, AliasSeq!Tup2)... // Should only expand Tup1

No, this means:

T!(Tup1[0], AliasSeq!(Tup2[0])), T!(Tup1[1], AliasSeq!(Tup2[1])), ...

It might be that the more useful behavior is for ...-tuples to be
expandable:

T!(AliasSeq!(int, char)..., AliasSeq!Tup2)... // Should expand to
T!(int, Tup2), T!(char, Tup2)

I think the suggestion quoted by Manu is more useful, we should ignore
any inner ... when expanding an outer one. This gives maximum
flexibility. If you want something that's not considered a tuple to be
considered a tuple, you have the option of declaring it into a tuple
before the expression. If you want to go the other way (prevent
expansion of a tuple with ...), without this idea there's no recourse
(well, you could do a mixin, but that would suck).

-Steve

Apr 24
Simen Kjærås <simen.kjaras gmail.com> writes:
On Friday, 24 April 2020 at 12:22:51 UTC, Steven Schveighoffer
wrote:
On 4/24/20 1:16 AM, Simen Kjærås wrote:
On Friday, 24 April 2020 at 00:16:52 UTC, Manu wrote:
On a slightly unrelated note; there's a really interesting idea
about making nested ... not be expanded by the outer ...
For instance:

template T(A, B...) { ... }
T!(Tup1, Tup2...)...

I'm wondering if perhaps that's the wrong way around -

T!(Tup1, AliasSeq!Tup2)... // Should only expand Tup1

No, this means:

T!(Tup1[0], AliasSeq!(Tup2[0])), T!(Tup1[1],
AliasSeq!(Tup2[1])), ...

Yup, I noticed a little while ago. In my defense I had a migraine
when I wrote that. :p

--
Simen

Apr 24
Mafi <mafi example.org> writes:
On Thursday, 23 April 2020 at 12:43:59 UTC, Simen Kjærås wrote:
On Wednesday, 22 April 2020 at 12:04:30 UTC, Manu wrote:
This DIP single-handedly fixes compile-time issues in programs
I've written by reducing template instantiations by near-100%,
in particular, the expensive ones; recursive instantiations,
usually implementing some form of static map.

We should have done this a long time ago.

This is beautiful and awesome (syntax and all).

I was wondering if there's any way to to do a cross product
with this, like fun(Xs, Ys)... expand to fun(Xs[0], Ys[0]),
fun(Xs[0], Ys[1]), fun(Xs[1], Ys[0]), fun(Xs[1], Ys[1]), but
that might very well be rare enough to not warrant special
consideration.

I think ...-Expressions should first expand nested
...-Expressions (or equivalently explicitly ignore nested
...-Expressions). Then the cross-product can be expressed as:

template crossHelper(F, X, Y...) {
alias crossHelper = F(X, Y)...;
}

crossHelper!(S, X, Y...)...

=> S!(X[0], Y[0]), S!(X[1], Y[0]), ..., S!(X[n-1], Y[m-1])

because the nested expression Y... is expanded first resulting in
crossHelper!(S, X, Y[0]), ..., crossHelper!(S, X, Y[m-1]) which
then has the X expanded.

Apr 23
Stefan Koch <uplink.coder googlemail.com> writes:
On Thursday, 23 April 2020 at 15:06:51 UTC, Mafi wrote:
On Thursday, 23 April 2020 at 12:43:59 UTC, Simen Kjærås wrote:
[...]

I think ...-Expressions should first expand nested
...-Expressions (or equivalently explicitly ignore nested
...-Expressions). Then the cross-product can be expressed as:

template crossHelper(F, X, Y...) {
alias crossHelper = F(X, Y)...;
}

crossHelper!(S, X, Y...)...

=> S!(X[0], Y[0]), S!(X[1], Y[0]), ..., S!(X[n-1], Y[m-1])

because the nested expression Y... is expanded first resulting
in crossHelper!(S, X, Y[0]), ..., crossHelper!(S, X, Y[m-1])
which then has the X expanded.

that won't work.
F is a type parameter in crossHelper.

Apr 23
pineapple <meapineapple gmail.com> writes:
I like this idea, but I don't like the syntax.

pragma(msg, (Values + OnlyTwo)...);

alias staticMap(alias F, T...) = F!T...;

The immediate impression I have reading this code is that the
... postfix operator is modifying the result of Values +
OnlyTwo or F!T, but this isn't the case.

This syntax decision would make D less accessible to new people
and would make code take just that little bit more effort to read
even for people who know the syntax.

I would suggest that something should be written postfixing or
prefixing the alias sequence itself, so that it is more clear
that the operation is changing the nature of that sequence and
how it is operated upon, rather than acting upon the result of an
operation involving that sequence.

One example to illustrate what I mean by moving the operator:

pragma(msg, (Values[...] + OnlyTwo[...]));

alias staticMap(alias F, T...) = F[...]!T;

I think something other than [...] or ... would also be fine,
it's the position specifically that I want to make a point about.

Apr 23
Mafi <mafi example.org> writes:
On Thursday, 23 April 2020 at 15:11:16 UTC, Stefan Koch wrote:
On Thursday, 23 April 2020 at 15:06:51 UTC, Mafi wrote:
On Thursday, 23 April 2020 at 12:43:59 UTC, Simen Kjærås wrote:
[...]

I think ...-Expressions should first expand nested
...-Expressions (or equivalently explicitly ignore nested
...-Expressions). Then the cross-product can be expressed as:

template crossHelper(F, X, Y...) {
alias crossHelper = F(X, Y)...;
}

crossHelper!(S, X, Y...)...

=> S!(X[0], Y[0]), S!(X[1], Y[0]), ..., S!(X[n-1], Y[m-1])

because the nested expression Y... is expanded first resulting
in crossHelper!(S, X, Y[0]), ..., crossHelper!(S, X, Y[m-1])
which then has the X expanded.

that won't work.
F is a type parameter in crossHelper.

template crossHelper(alias F, X, Y...) {
alias crossHelper = F!(X, Y)...;
}

crossHelper!(S, X, Y...)...

Apr 23
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/23/20 11:39 AM, Mafi wrote:
On Thursday, 23 April 2020 at 15:11:16 UTC, Stefan Koch wrote:
On Thursday, 23 April 2020 at 15:06:51 UTC, Mafi wrote:
On Thursday, 23 April 2020 at 12:43:59 UTC, Simen Kjærås wrote:
[...]

I think ...-Expressions should first expand nested ...-Expressions
(or equivalently explicitly ignore nested ...-Expressions). Then the
cross-product can be expressed as:

template crossHelper(F, X, Y...) {
alias crossHelper = F(X, Y)...;
}

crossHelper!(S, X, Y...)...

=> S!(X[0], Y[0]), S!(X[1], Y[0]), ..., S!(X[n-1], Y[m-1])

because the nested expression Y... is expanded first resulting in
crossHelper!(S, X, Y[0]), ..., crossHelper!(S, X, Y[m-1]) which then
has the X expanded.

that won't work.
F is a type parameter in crossHelper.

template crossHelper(alias F, X, Y...) {
alias crossHelper = F!(X, Y)...;
}

crossHelper!(S, X, Y...)...

In this call, Y... does nothing, it just trivially means Y, so you
really have:

crossHelper!(S, X, Y)...

Note here that X and Y are the parameters, not the inner template
instantations. Let's unconfuse this, and see what we really are talking
about. Let's say we want to do A cross B:

crossHelper!(S, A, B)...

Now, A[0] will map to X in the template, and AliasSeq!(A[1 .. $], B) will map to Y. So this is not what you want. Tuple lists automatically flatten, and there can only be one variadic. In order to capture 2 tuples as template parameters, you need a nested template. For instance:

template crossHelper(alias F, X...)
{
    template c(Y...)
    {
        // in here, you have X as a tuple and Y as a tuple
    }
}

Called like:

crossHelper!(S, A).c!(B);

Not as nice as one would like. But I think it would be a requirement.

-Steve

Apr 23
Stefan Koch <uplink.coder googlemail.com> writes:
On Thursday, 23 April 2020 at 16:40:05 UTC, Steven Schveighoffer wrote:
crossHelper!(S, A).c!(B);

Not as nice as one would like. But I think it would be a requirement.

-Steve

As long as we don't allow tuples of tuples, yes. When we allow tuples of tuples many nice things can happen, but it wouldn't play as nicely with current language rules.

Apr 23
Mafi <mafi example.org> writes:
On Thursday, 23 April 2020 at 16:40:05 UTC, Steven Schveighoffer wrote:
On 4/23/20 11:39 AM, Mafi wrote:
Well, then what about:

template crossHelper(alias F, X, Y...)
{
    alias crossHelper = F!(X, Y)...;
}

crossHelper!(S, X, Y...)...

In this call, Y... does nothing, it just trivially means Y, so you really have:

This is wrong under the assumption I wrote before: nested ...-expressions are expanded first!

To explain in more detail what I mean: Let A be the sequence (101, 102, 103) and B be (201, 202, 203). Then

F!(A, B)... is (F!(101, 201), F!(102, 202), F!(103, 203))

analogous to simultaneous expansion in C++ and as explained in the DIP draft. But of course:

F!(A, B[0], B[1], B[2])... is (F!(101, 201, 202, 203), F!(102, 201, 202, 203), F!(103, 201, 202, 203))

So if ...-expansions are expanded inside out, then:

F!(A, B...)... is F!(A, B[0], B[1], B[2])... is (F!(101, 201, 202, 203), F!(102, 201, 202, 203), F!(103, 201, 202, 203))

This can be exploited to implement cross product using my helper I wrote above.
Apr 23
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/23/20 1:44 PM, Mafi wrote:
On Thursday, 23 April 2020 at 16:40:05 UTC, Steven Schveighoffer wrote:
On 4/23/20 11:39 AM, Mafi wrote:
Well, then what about:

template crossHelper(alias F, X, Y...)
{
    alias crossHelper = F!(X, Y)...;
}

crossHelper!(S, X, Y...)...

In this call, Y... does nothing, it just trivially means Y, so you really have:

This is wrong under the assumption I wrote before: nested ...-expressions are expanded first!

To explain in more detail what I mean: Let A be the sequence (101, 102, 103) and B be (201, 202, 203). Then

F!(A, B)... is (F!(101, 201), F!(102, 202), F!(103, 203))

analogous to simultaneous expansion in C++ and as explained in the DIP draft. But of course:

F!(A, B[0], B[1], B[2])... is (F!(101, 201, 202, 203), F!(102, 201, 202, 203), F!(103, 201, 202, 203))

So if ...-expansions are expanded inside out, then:

F!(A, B...)... is F!(A, B[0], B[1], B[2])... is (F!(101, 201, 202, 203), F!(102, 201, 202, 203), F!(103, 201, 202, 203))

This can be exploited to implement cross product using my helper I wrote above.

What you are asking for is that B is expanded into "not-a-tuple" before the outer expression is expanded. I don't think that's the plan, I think SomeTuple... expands to SomeTuple. What I think the DIP is going to do is expand B first into a tuple, and then that tuple is now expanded the same as F!(A, B)...
But maybe I'm wrong? For sure more clarification is needed on nested expansion.

-Steve

Apr 23
Stefan Koch <uplink.coder googlemail.com> writes:
On Thursday, 23 April 2020 at 18:05:07 UTC, Steven Schveighoffer wrote:
On 4/23/20 1:44 PM, Mafi wrote:
[...]

What you are asking for is that B is expanded into "not-a-tuple" before the outer expression is expanded. I don't think that's the plan, I think SomeTuple... expands to SomeTuple. What I think the DIP is going to do is expand B first into a tuple, and then that tuple is now expanded the same as F!(A, B)...

But maybe I'm wrong? For sure more clarification is needed on nested expansion.

-Steve

If you post a complete example I can run it, and show you what happens. I would say inner tuples expand first, since that's how evaluating an expression tree works; you start with the innermost expressions.

Apr 23
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/23/20 2:18 PM, Stefan Koch wrote:
On Thursday, 23 April 2020 at 18:05:07 UTC, Steven Schveighoffer wrote:
On 4/23/20 1:44 PM, Mafi wrote:
[...]

What you are asking for is that B is expanded into "not-a-tuple" before the outer expression is expanded. I don't think that's the plan, I think SomeTuple... expands to SomeTuple. What I think the DIP is going to do is expand B first into a tuple, and then that tuple is now expanded the same as F!(A, B)...

But maybe I'm wrong? For sure more clarification is needed on nested expansion.

If you post a complete example I can run it, and show you what happens. I would say inner tuples expand first, since that's how evaluating an expression tree works; you start with the innermost expressions.

The question is whether the expanded inner expression is considered a tuple for expansion later or not.
alias tuple1 = AliasSeq!(1, 2, 3);
alias tuple2 = AliasSeq!(4, 5, 6);
pragma(msg, [AliasSeq!(tuple1..., tuple2)...]);

If it expands tuple1 first, then expands tuple2 without re-expanding the result of tuple1..., then it should be:

[1, 2, 3, 4, 1, 2, 3, 5, 1, 2, 3, 6]

If it expands both tuple1 and tuple2 together, it should be:

[1, 4, 2, 5, 3, 6]

-Steve

It's the latter. You cannot re-expand.

Apr 23
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/23/20 3:02 PM, Stefan Koch wrote:
On Thursday, 23 April 2020 at 18:57:04 UTC, Steven Schveighoffer wrote:
On 4/23/20 2:18 PM, Stefan Koch wrote:
[...]

The question is whether the expanded inner expression is considered a tuple for expansion later or not.
Actually, let me be more specific.

Given TTup = T!(int, long, double, float), the expression cast(TTup)0... yields tuple(0, 0L, 0.0, 0.0f).

Given VTup = T!(1, 2, 3, 4), you can write the expression cast(TTup)VTup..., which yields tuple(1, 2L, 3.0, 4.0f).

Should the lengths of the tuples in a subexpression using multiple tuples differ, that's an error.

Apr 23
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/23/20 4:08 PM, Stefan Koch wrote:
On Thursday, 23 April 2020 at 19:27:31 UTC, Steven Schveighoffer wrote:
On 4/23/20 3:02 PM, Stefan Koch wrote:
On Thursday, 23 April 2020 at 18:57:04 UTC, Steven Schveighoffer wrote:
[...]

It's the latter. You cannot re-expand.

Those 2 are contradictory statements. If it's the latter, then tuple1... is expanded to a tuple that is then used in the outer expansion.

k tuples with n elements can only expand to 1 tuple with k*n elements. They can't get more.

You have lost me. Can you go through a blow-by-blow of how this nested expansion happens?

alias tuple1 = AliasSeq!(1, 2, 3);
alias tuple2 = AliasSeq!(4, 5, 6);

AliasSeq!(tuple1..., tuple2)...

-Steve

Apr 23
Stefan Koch <uplink.coder googlemail.com> writes:
On Thursday, 23 April 2020 at 20:23:03 UTC, Steven Schveighoffer wrote:
On 4/23/20 4:08 PM, Stefan Koch wrote:
On Thursday, 23 April 2020 at 19:27:31 UTC, Steven Schveighoffer wrote:
On 4/23/20 3:02 PM, Stefan Koch wrote:
On Thursday, 23 April 2020 at 18:57:04 UTC, Steven Schveighoffer wrote:
[...]

It's the latter. You cannot re-expand.

Those 2 are contradictory statements. If it's the latter, then tuple1... is expanded to a tuple that is then used in the outer expansion.

k tuples with n elements can only expand to 1 tuple with k*n elements. They can't get more.

You have lost me. Can you go through a blow-by-blow of how this nested expansion happens?

alias tuple1 = AliasSeq!(1, 2, 3);
alias tuple2 = AliasSeq!(4, 5, 6);

AliasSeq!(tuple1..., tuple2)...

-Steve

This nested expression does not parse. And I don't think it should.
Because we cannot nest tuples in tuples.

Apr 23
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/23/20 4:34 PM, Stefan Koch wrote:
On Thursday, 23 April 2020 at 20:23:03 UTC, Steven Schveighoffer wrote:
On 4/23/20 4:08 PM, Stefan Koch wrote:
On Thursday, 23 April 2020 at 19:27:31 UTC, Steven Schveighoffer wrote:

You have lost me. Can you go through a blow-by-blow of how this nested expansion happens?

alias tuple1 = AliasSeq!(1, 2, 3);
alias tuple2 = AliasSeq!(4, 5, 6);

AliasSeq!(tuple1..., tuple2)...

This nested expression does not parse. And I don't think it should. Because we cannot nest tuples in tuples.

OK, so the answer is, you can only have one ... per expression?

-Steve

Apr 23
12345swordy <alexanderheistermann gmail.com> writes:
On Thursday, 23 April 2020 at 20:34:20 UTC, Stefan Koch wrote:
Because we cannot nest tuples in tuples.

Not yet anyway.

Apr 23
Manu <turkeyman gmail.com> writes:
On Fri, Apr 24, 2020 at 5:05 AM Stefan Koch via Digitalmars-d <digitalmars-d puremagic.com> wrote:
On Thursday, 23 April 2020 at 18:57:04 UTC, Steven Schveighoffer wrote:
On 4/23/20 2:18 PM, Stefan Koch wrote:
[...]

The question is whether the expanded inner expression is considered a tuple for expansion later or not.

alias tuple1 = AliasSeq!(1, 2, 3);
alias tuple2 = AliasSeq!(4, 5, 6);
pragma(msg, [AliasSeq!(tuple1..., tuple2)...]);

If it expands tuple1 first, then expands tuple2 without re-expanding the result of tuple1..., then it should be:

[1, 2, 3, 4, 1, 2, 3, 5, 1, 2, 3, 6]

If it expands both tuple1 and tuple2 together, it should be:

[1, 4, 2, 5, 3, 6]

-Steve

It's the latter. You cannot re-expand.

I think the former is correct though. And that's a trivial change. Our existing code will evaluate the latter, but that only happens because I put a special-case for doing that... and I think that special case is wrong. Natural semantics with no special handling is preferred. It allows much more articulate control, and creates more opportunities.
Apr 23
Walter Bright <newshound2 digitalmars.com> writes:
On 4/22/2020 5:04 AM, Manu wrote:
[...]

Well done, Manu! This looks like a very good idea. I'm optimistic about it. Thank you for pushing this forward!

Apr 23
Manu <turkeyman gmail.com> writes:
On Fri, Apr 24, 2020 at 8:05 AM Walter Bright via Digitalmars-d <digitalmars-d puremagic.com> wrote:
On 4/22/2020 5:04 AM, Manu wrote:
[...]

Well done, Manu! This looks like a very good idea. I'm optimistic about it. Thank you for pushing this forward!

🎉🎉🎉 Huzzah! The suspense is lifted!

Apr 23
Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Thursday, 23 April 2020 at 22:00:40 UTC, Walter Bright wrote:
On 4/22/2020 5:04 AM, Manu wrote:
[...]

Well done, Manu! This looks like a very good idea. I'm optimistic about it. Thank you for pushing this forward!

That made my day a sunny day! :-P

Apr 24
Walter Bright <newshound2 digitalmars.com> writes:
On 4/22/2020 5:04 AM, Manu wrote:
[...]

Ok, I've had a chance to think about it. It's a scathingly brilliant idea! But (there's always a but!) something stuck out at me. Consider arrays:

void test()
{
    auto a = [1, 2, 3];
    int[3] b = a[]*a[]; // b[0] = a[0]*a[0]; b[1] = a[1]*a[1]; b[2] = a[2]*a[2];
    int[3] c = a[]*2;   // c[0] = a[0]*2; c[1] = a[1]*2; c[2] = a[2]*2;
}

These look familiar! D tuples already use array syntax - they can be indexed and sliced. Instead of the ... syntax, just use array syntax!

The examples from the DIP:

=====================================
--- DIP
(Tup*10)...  -->  ( Tup[0]*10, Tup[1]*10, ... , Tup[$-1]*10 )

--- Array syntax
Tup*10

====================================
--- DIP
alias Tup = AliasSeq!(1, 2, 3);
int[] myArr;
assert([ myArr[Tup + 1]... ] == [ myArr[Tup[0] + 1], myArr[Tup[1] + 1],
myArr[Tup[2] + 1] ]);

--- Array
alias Tup = AliasSeq!(1, 2, 3);
int[] myArr;
assert([ myArr[Tup + 1] ] == [ myArr[Tup[0] + 1], myArr[Tup[1] + 1],
myArr[Tup[2] + 1] ]);

===================================
---DIP
alias Values = AliasSeq!(1, 2, 3);
alias Types = AliasSeq!(int, short, float);
pragma(msg, cast(Types)Values...);

---Array
alias Values = AliasSeq!(1, 2, 3);
alias Types = AliasSeq!(int, short, float);
pragma(msg, cast(Types)Values);

=================================
---DIP
alias OnlyTwo = AliasSeq!(10, 20);
pragma(msg, (Values + OnlyTwo)...);

---Array
alias OnlyTwo = AliasSeq!(10, 20);
pragma(msg, Values + OnlyTwo);

The idea is simply if we have:

t op c

where t is a tuple and c is not, the result is:

tuple(t[0] op c, t[1] op c, ..., t[length - 1] op c)

For:

t1 op t2

the result is:

tuple(t1[0] op t2[0], t1[1] op t2[1], ..., t1[length - 1] op t2[length - 1])

The AST doesn't have to be walked to make this work, just do it as part of the
usual bottom-up semantic processing.

1. no new grammar
2. no new operator precedence rules
3. turn expressions that are currently errors into doing the obvious thing

Why does C++ use ... rather than array syntax? Because C++ doesn't have arrays!

Apr 23
Manu <turkeyman gmail.com> writes:
On Fri, Apr 24, 2020 at 2:20 PM Walter Bright via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

On 4/22/2020 5:04 AM, Manu wrote:
[...]

Ok, I've had a chance to think about it. It's a scathingly brilliant idea!

But (there's always a but!) something stuck out at me. Consider arrays:

void test()
{
auto a = [1, 2, 3];
int[3] b = a[]*a[]; // b[0] = a[0]*a[0]; b[1] = a[1]*a[1]; b[2] =
a[2]*a[2];
int[3] c = a[]*2; // c[0] = a[0]*2; c[1] = a[1]*2; c[2] = a[2]*2;
}

These look familiar! D tuples already use array syntax - they can be
indexed and
sliced. Instead of the ... syntax, just use array syntax!

The examples from the DIP:

=====================================
--- DIP
(Tup*10)...  -->  ( Tup[0]*10, Tup[1]*10, ... , Tup[$-1]*10 )

--- Array syntax
Tup*10

====================================
--- DIP
alias Tup = AliasSeq!(1, 2, 3);
int[] myArr;
assert([ myArr[Tup + 1]... ] == [ myArr[Tup[0] + 1], myArr[Tup[1] + 1], myArr[Tup[2] + 1] ]);

--- Array
alias Tup = AliasSeq!(1, 2, 3);
int[] myArr;
assert([ myArr[Tup + 1] ] == [ myArr[Tup[0] + 1], myArr[Tup[1] + 1], myArr[Tup[2] + 1] ]);

===================================
---DIP
alias Values = AliasSeq!(1, 2, 3);
alias Types = AliasSeq!(int, short, float);
pragma(msg, cast(Types)Values...);

---Array
alias Values = AliasSeq!(1, 2, 3);
alias Types = AliasSeq!(int, short, float);
pragma(msg, cast(Types)Values);

=================================
---DIP
alias OnlyTwo = AliasSeq!(10, 20);
pragma(msg, (Values + OnlyTwo)...);

---Array
alias OnlyTwo = AliasSeq!(10, 20);
pragma(msg, Values + OnlyTwo);

The idea is simply if we have:

t op c

where t is a tuple and c is not, the result is:

tuple(t[0] op c, t[1] op c, ..., t[length - 1] op c)

For:

t1 op t2

the result is:

tuple(t1[0] op t2[0], t1[1] op t2[1], ..., t1[length - 1] op t2[length - 1])

The AST doesn't have to be walked to make this work, just do it as part of the usual bottom-up semantic processing.

I thought about this, but this reaches much further than "a op b". When I considered your approach, it appeared to add a lot of edges and limits on the structure of the expressions, particularly where it interacts with var-args or variadic templates.

The advantage is:
1. no new grammar

Fortunately, the grammar is trivial.

2. no new operator precedence rules
3. turn expressions that are currently errors into doing the obvious thing

This is compelling, but I couldn't think how it can work from end to end.

Why does C++ use ... rather than array syntax? Because C++ doesn't have arrays!

Another reason I introduce ... is for static fold.
The follow-up to this DIP would make this expression work:

Tup + ...  ->  Tup[0] + Tup[1] + ... + Tup[$-1]

For instance, up-thread it was noted that a static-fold algorithm may
implement a find-type-in-tuple; it would look like this:
is(MyType == Types) || ...  <- evaluates to true if MyType is present in
Types with no template instantiation junk.

So, the ... is deliberately intended to bring additional value.

Can you show how your suggestion applies to some more complex cases (not
yet noted in the DIP).

// controlled expansion:
alias Tup = AliasSeq!(0, 1, 2);
alias Tup2 = AliasSeq!(3, 4, 5);
[ Tup, Tup2... ]...  ->   [ 0, 3, 4, 5 ],  [ 1, 3, 4, 5  ],  [ 2, 3, 4, 5 ]

// template instantiations
alias TTup = AliasSeq!(int, float, char);
MyTemplate!(Tup, TTup.sizeof...)...  ->  MyTemplate!(0, 4, 4, 1),
MyTemplate!(1, 4, 4, 1),  MyTemplate!(2, 4, 4, 1)

// replace staticMap
alias staticMap(alias F, T...) = F!T...;

// more controlled expansion, with template arg lists
AliasSeq!(10, Tup, 20)...  -> ( 10, 0, 20, 10, 1, 20, 10, 2, 20 )
AliasSeq!(10, Tup..., 20)  -> ( 10, 0, 1, 2, 20 )

// static fold (outside the scope of this DIP, but it's next in line)
Tup + ...  ->  Tup[0] + Tup[1] + ... + Tup[$-1]

// static find
is(MyType == Types) || ...

That said, with respect to these fold expressions, it would be ideal if they applied to arrays equally as I propose to tuples.

Apr 23
Walter Bright <newshound2 digitalmars.com> writes:
On 4/23/2020 10:51 PM, Manu wrote:
Another reason I introduce ... is for static fold.
The follow-up to this DIP would make this expression work:

Tup + ...  ->  Tup[0] + Tup[1] + ... + Tup[$-1]

I expect static foreach can handle that. But we can dig a little deeper.
D doesn't have a special syntax to sum the elements of an array, but it
can use a library function to do it. The next observation is that to sum
the elements of a tuple, all the tuple members need to be implicitly
convertible to a single arithmetic type. There is a way to do that:

[ Tup ]

and now the tuple is converted to an array literal, which can be summed
the same way array literals are currently summed. I.e. no need for extra
syntax.

For instance, up-thread it was noted that a static-fold algorithm may
implement a find-type-in-tuple; it would look like this:
is(MyType == Types) || ...  <- evaluates to true if MyType is present in
Types with no template instantiation junk.

So, the ... is deliberately intended to bring additional value.

is(MyType in Types)

could work. No need for ...

Can you show how your suggestion applies to some more complex cases (not yet
noted in the DIP).

// controlled expansion:
alias Tup = AliasSeq!(0, 1, 2);
alias Tup2 = AliasSeq!(3, 4, 5);
[ Tup, Tup2... ]...  -> [ 0, 3, 4, 5 ], [ 1, 3, 4, 5  ], [ 2, 3, 4, 5 ]

[ Tup ~ [Tup2] ]

Though again, should be using arrays for this in the first place, not tuples.

// template instantiations
alias TTup = AliasSeq!(int, float, char);
MyTemplate!(Tup, TTup.sizeof...)...  -> MyTemplate!(0, 4, 4, 1),
MyTemplate!(1,
4, 4, 1), MyTemplate!(2, 4, 4, 1)

Although we don't have UFCS for templates, this would be a point for that:

Tup.MyTemplate!(TTup.sizeof)

Otherwise it would suffer from the bottom-up semantic analysis problem I
mention
further on.

// replace staticMap
alias staticMap(alias F, T...) = F!T...;

alias staticMap(alias F, T...) = F!T;

// more controlled expansion, with template arg lists
AliasSeq!(10, Tup, 20)...  -> ( 10, 0, 20, 10, 1, 20, 10, 2, 20 )

This one can't be done with bottom-up
semantics, which is what D normally does. It relies on some top-down
modification for it, which is likely to cause all sorts of unexpected
difficulties. See the UFCS example above, where Tup is moved outside of the
argument list.

AliasSeq!(10, Tup..., 20)  -> ( 10, 0, 1, 2, 20 )

AliasSeq!(10, Tup, 20) -> ( 10, 0, 1, 2, 20 )

That said, with respect to these fold expressions, it would be ideal if they
applied to arrays equally as I propose to tuples.

There's a lot of merit to the idea of arrays and tuples using a common syntax,
which is what I'm proposing.

Apr 23
WebFreak001 <d.forum webfreak.org> writes:
On Friday, 24 April 2020 at 06:32:16 UTC, Walter Bright wrote:
[...]

// replace staticMap
alias staticMap(alias F, T...) = F!T...;

alias staticMap(alias F, T...) = F!T;

[...]

this would be a breaking change:

currently:
template F(T...)
{
}

alias staticMap(alias F, T...) = F!T;

pragma(msg, staticMap!(F, int, short).stringof);
-> F!(int, short)

but with that change suddenly:
pragma(msg, staticMap!(F, int, short).stringof);
-> SomeTuple!(F!int, F!short)

Apr 23
Stefan Koch <uplink.coder googlemail.com> writes:
On Friday, 24 April 2020 at 06:37:42 UTC, WebFreak001 wrote:
On Friday, 24 April 2020 at 06:32:16 UTC, Walter Bright wrote:
[...]

// replace staticMap
alias staticMap(alias F, T...) = F!T...;

alias staticMap(alias F, T...) = F!T;

[...]

this would be a breaking change:

currently:
template F(T...)
{
}

alias staticMap(alias F, T...) = F!T;

pragma(msg, staticMap!(F, int, short).stringof);
-> F!(int, short)

but with that change suddenly:
pragma(msg, staticMap!(F, int, short).stringof);
-> SomeTuple!(F!int, F!short)

nope, compile error.
F does not have an eponymous member.

Apr 23
WebFreak001 <d.forum webfreak.org> writes:
On Friday, 24 April 2020 at 06:42:53 UTC, Stefan Koch wrote:
On Friday, 24 April 2020 at 06:37:42 UTC, WebFreak001 wrote:
On Friday, 24 April 2020 at 06:32:16 UTC, Walter Bright wrote:
[...]

// replace staticMap
alias staticMap(alias F, T...) = F!T...;

alias staticMap(alias F, T...) = F!T;

[...]

this would be a breaking change:

currently:
template F(T...)
{
}

alias staticMap(alias F, T...) = F!T;

pragma(msg, staticMap!(F, int, short).stringof);
-> F!(int, short)

but with that change suddenly:
pragma(msg, staticMap!(F, int, short).stringof);
-> SomeTuple!(F!int, F!short)

nope, compile error.
F does not have an eponymous member.

um no? try it on https://run.dlang.io/ :

template F(T...)
{
}

alias staticMap(alias F, T...) = F!T;

pragma(msg, staticMap!(F, int, short).stringof);

(link shortener was broken when I posted this)

Apr 24
Manu <turkeyman gmail.com> writes:
On Fri, Apr 24, 2020 at 4:35 PM Walter Bright via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

On 4/23/2020 10:51 PM, Manu wrote:
Another reason I introduce ... is for static fold.
The follow-up to this DIP would make this expression work:

Tup + ...  ->  Tup[0] + Tup[1] + ... + Tup[$-1]

I expect static foreach can handle that. But we can dig a little deeper. D
doesn't have a special syntax to sum the elements of an array, but it can
use a library function to do it. The next observation is that to sum the
elements of a tuple, all the tuple members need to be implicitly convertible
to a single arithmetic type. There is a way to do that:

[ Tup ]

and now the tuple is converted to an array literal, which can be summed the
same way array literals are currently summed. I.e. no need for extra syntax.

For instance, up-thread it was noted that a static-fold algorithm may
implement a find-type-in-tuple; it would look like this:

is(MyType == Types) || ...  <- evaluates true if MyType is present in Types,
with no template instantiation junk.

So, the ... is deliberately intended to bring additional value.

is(MyType in Types) could work. No need for ...

You can imagine that the expressions could be far more elaborate than that.
Can you show how your suggestion applies to some more complex cases (not yet
noted in the DIP)?

// controlled expansion:
alias Tup = AliasSeq!(0, 1, 2);
alias Tup2 = AliasSeq!(3, 4, 5);
[ Tup, Tup2... ]...  ->  [ 0, 3, 4, 5 ], [ 1, 3, 4, 5 ], [ 2, 3, 4, 5 ]

[ Tup ~ [Tup2] ]

Though again, one should be using arrays for this in the first place, not
tuples.

// template instantiations
alias TTup = AliasSeq!(int, float, char);
MyTemplate!(Tup, TTup.sizeof...)...  ->  MyTemplate!(0, 4, 4, 1),
MyTemplate!(1, 4, 4, 1), MyTemplate!(2, 4, 4, 1)

Although we don't have UFCS for templates, this would be a point for that:

Tup.MyTemplate!(TTup.sizeof)

Otherwise it would suffer from the bottom-up semantic analysis problem I
mention further on.

I think this only satisfies a narrow set of possibilities. There may be
parallel expansion, and UFCS only handles one single UFCS argument, and also
requires that it be the first argument.

// replace staticMap
alias staticMap(alias F, T...) = F!T...;

alias staticMap(alias F, T...) = F!T;

And if F accepts variadic arguments?

template F(Args...)

// more controlled expansion, with template arg lists
AliasSeq!(10, Tup, 20)...  ->  ( 10, 0, 20, 10, 1, 20, 10, 2, 20 )

What I don't like about this example is it can't be done with bottom-up
semantics, which is what D normally does. It relies on some top-down
modification for it, which is likely to cause all sorts of unexpected
difficulties. See the UFCS example above, where Tup is moved outside of the
argument list.

I'd like to know why you consider that troubling? We were able to implement
this DIP in a relatively small and completely self-contained block of code.
It's totally isolated, and easy to understand.

AliasSeq!(10, Tup..., 20)  ->  ( 10, 0, 1, 2, 20 )

AliasSeq!(10, Tup, 20)  ->  ( 10, 0, 1, 2, 20 )

This is just normal D, but the first case (above) needs an expression.

That said, with respect to these fold expressions, it would be ideal if they
applied to arrays equally as I propose to tuples.

There's a lot of merit to the idea of arrays and tuples using a common
syntax, which is what I'm proposing.

I agree. I'm wondering if it might be the case that ... is still a useful
operator to make expansion explicit and unambiguous, and my definition is
expanded to apply to arrays in addition to tuples, which would maintain that
uniformity.

Apr 23
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/24/20 2:45 AM, Manu wrote:

AliasSeq!(10, Tup..., 20)  ->  ( 10, 0, 1, 2, 20 )

AliasSeq!(10, Tup, 20)  ->  ( 10, 0, 1, 2, 20 )

This is just normal D, but the first case (above) needs an expression.

The first is trivial anyway, no expansion needed. This case isn't important
for the proposal.

-Steve

Apr 24
Manu <turkeyman gmail.com> writes:
On Fri, Apr 24, 2020 at 4:35 PM Walter Bright via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

On 4/23/2020 10:51 PM, Manu wrote:
Another reason I introduce ... is for static fold.
The follow-up to this DIP would make this expression work:

Tup + ...  ->  Tup[0] + Tup[1] + ... + Tup[$-1]

I expect static foreach can handle that. But we can dig a little deeper. D
doesn't have a special syntax to sum the elements of an array, but it can
use a library function to do it. The next observation is that to sum the
elements of a tuple, all the tuple members need to be implicitly convertible
to a single arithmetic type. There is a way to do that:

[ Tup ]

No, it's not necessary that they are common types. It's actually the
opposite intent of the expression.
If they wanted what you say, they would write what you say today.

The point of tuple expansions as opposed to using an array is almost
certainly because you DO have mixed types.
If the BinOp is logical, anything that can coerce to true/false is
acceptable. If it's arithmetic, then operator overloads are probably part
of the equation.

Apr 23
Walter Bright <newshound2 digitalmars.com> writes:
On 4/23/2020 11:52 PM, Manu wrote:
On Fri, Apr 24, 2020 at 4:35 PM Walter Bright via Digitalmars-d
<digitalmars-d puremagic.com <mailto:digitalmars-d puremagic.com>> wrote:

On 4/23/2020 10:51 PM, Manu wrote:
> Another reason I introduce ... is for static fold.
> The follow-up to this DIP would make this expression work:
>
>    Tup + ...  ->  Tup[0] + Tup[1] + ... + Tup[$-1]

I expect static foreach can handle that. But we can dig a little deeper. D
doesn't have a special syntax to sum the elements of an array, but it can
use a library function to do it. The next observation is that to sum the
elements of a tuple, all the tuple members need to be implicitly convertible
to a single arithmetic type. There is a way to do that:

[ Tup ]

No, it's not necessary that they are common types. It's actually the
opposite intent of the expression.

The only way:

1 + 'a' + 1.0 + 1L + 2.0L

can be computed is if the operands are brought to a common arithmetic type.

Apr 24
Manu <turkeyman gmail.com> writes:
On Fri, Apr 24, 2020 at 5:35 PM Walter Bright via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

On 4/23/2020 11:52 PM, Manu wrote:
On Fri, Apr 24, 2020 at 4:35 PM Walter Bright via Digitalmars-d
<digitalmars-d puremagic.com <mailto:digitalmars-d puremagic.com>> wrote:

On 4/23/2020 10:51 PM, Manu wrote:
> Another reason I introduce ... is for static fold.
> The follow-up to this DIP would make this expression work:
>
>    Tup + ...  ->  Tup[0] + Tup[1] + ... + Tup[$-1]

I expect static foreach can handle that. But we can dig a little deeper. D
doesn't have a special syntax to sum the elements of an array, but it can
use a library function to do it. The next observation is that to sum the
elements of a tuple, all the tuple members need to be implicitly convertible
to a single arithmetic type. There is a way to do that:

[ Tup ]

No, it's not necessary that they are common types. It's actually the
opposite intent of the expression.

The only way:

1 + 'a' + 1.0 + 1L + 2.0L

can be computed is if the operands are brought to a common arithmetic type.

It doesn't make much sense to think in terms of primitive types. As I just
said (but you truncated), operator overloads are almost certainly part of
this equation.
I think it's more common to do a fold like this with logical operators
though; && appears 9 times out of 10 in my code.

Apr 24
Walter Bright <newshound2 digitalmars.com> writes:
On 4/24/2020 12:39 AM, Manu wrote:
It doesn't make much sense to think in terms of primitive types. As I just
said (but you truncated), operator overloads are almost certainly part of
this equation.

I think it's more common to do a fold like this with logical operators though;
&& appears 9 times out of 10 in my code.

[ cast(bool)Tup ]

Apr 24
Manu <turkeyman gmail.com> writes:
On Fri, Apr 24, 2020 at 6:20 PM Walter Bright via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

On 4/24/2020 12:39 AM, Manu wrote:
It doesn't make much sense to think in terms of primitive types. As I just
said (but you truncated), operator overloads are almost certainly part of
this equation.

This claim doesn't make sense. A custom type can BinOp any adjacent type it
likes.

I think it's more common to do a fold like this with logical operators
though; && appears 9 times out of 10 in my code.

[ cast(bool)Tup ]

And then fold them?

Apr 24
Walter Bright <newshound2 digitalmars.com> writes:
On 4/24/2020 1:34 AM, Manu wrote:
On Fri, Apr 24, 2020 at 6:20 PM Walter Bright via Digitalmars-d
<digitalmars-d puremagic.com <mailto:digitalmars-d puremagic.com>> wrote:

On 4/24/2020 12:39 AM, Manu wrote:
> It doesn't make much sense to think in terms of primitive types. As I just
> said (but you truncated), operator overloads are almost certainly part of
> this equation.

This claim doesn't make sense. A custom type can BinOp any adjacent type it
likes.

I can't see the legitimate use case for:

((t[0] + t[1]) + t[2])

for a tuple if relying on diverse types with operator overloading, because of
the associative ordering dependency.

If anyone relied on that, it would be a major code smell. Adding new syntax for
tuples to support that would be a mistake.

> I think it's more common to do a fold like this with logical operators
> though; && appears 9 times out of 10 in my code.

[ cast(bool)Tup ]

And then fold them?

Since then it's an array, use the existing array folding methods.

Apr 24
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/24/20 4:15 PM, Walter Bright wrote:

Since then it's an array, use the existing array folding methods.

This is probably good enough, because we can generate arrays at
compile-time and process them via CTFE.

Some key targets within std.meta are anySatisfy/allSatisfy.

A quick stab at this (I'm going to stick with the ellipsis version as
it's easy to ):

import std.algorithm : canFind;
enum anySatisfy(alias F, T...) = [F!(T)...].canFind(true);
enum allSatisfy(alias F, T...) = ![F!(T)...].canFind(false);

I'm so in love with this feature, when can we get it in?

-Steve
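As a point of comparison (not part of the DIP), the same anySatisfy/allSatisfy shape already exists in C++17 as a fold over a predicate applied to each pack element:

```cpp
#include <type_traits>

// Map a predicate trait over a parameter pack and fold with || / &&,
// the C++17 counterpart of the [F!(T)...] array trick above.
template <template <typename> class F, typename... Ts>
constexpr bool anySatisfy = (F<Ts>::value || ...);

template <template <typename> class F, typename... Ts>
constexpr bool allSatisfy = (F<Ts>::value && ...);

static_assert(anySatisfy<std::is_integral, float, int>, "");
static_assert(!allSatisfy<std::is_integral, float, int>, "");
static_assert(allSatisfy<std::is_integral, short, int, long>, "");

int main() {}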

Apr 24
Timon Gehr <timon.gehr gmx.ch> writes:
On 24.04.20 23:00, Steven Schveighoffer wrote:
On 4/24/20 4:15 PM, Walter Bright wrote:

Since then it's an array, use the existing array folding methods.

This is probably good enough, because we can generate arrays at
compile-time and process them via CTFE.

Some key targets within std.meta are anySatisfy/allSatisfy.

A quick stab at this (I'm going to stick with the ellipsis version as
it's easy to ):

import std.algorithm : canFind;
enum anySatisfy(alias F, T...) = [F!(T)...].canFind(true);
enum allSatisfy(alias F, T...) = ![F!(T)...].canFind(false);

I'm so in love with this feature, when can we get it in?

-Steve

This is not equivalent to the current implementations.

import std.meta;

enum Foo(int x)=true;
// true now, but compile error with your approach:
pragma(msg, anySatisfy!(Foo, 1, int));

May 09
Steven Schveighoffer <schveiguy gmail.com> writes:
On 5/9/20 11:28 AM, Timon Gehr wrote:
On 24.04.20 23:00, Steven Schveighoffer wrote:
On 4/24/20 4:15 PM, Walter Bright wrote:

Since then it's an array, use the existing array folding methods.

This is probably good enough, because we can generate arrays at
compile-time and process them via CTFE.

Some key targets within std.meta are anySatisfy/allSatisfy.

A quick stab at this (I'm going to stick with the ellipsis version as
it's easy to ):

import std.algorithm : canFind;
enum anySatisfy(alias F, T...) = [F!(T)...].canFind(true);
enum allSatisfy(alias F, T...) = ![F!(T)...].canFind(false);

I'm so in love with this feature, when can we get it in?

This is not equivalent to the current implementations.

import std.meta;

enum Foo(int x)=true;
// true now, but compile error with your approach:
pragma(msg, anySatisfy!(Foo, 1, int));

I'm not sure I'd call that a "feature" though, or just invalid input:

pragma(msg, anySatisfy!(Foo, int, 1)) => error.

This should provide the best situation I think:

template evalBool(alias F, V...) if (V.length == 1)
{
    static if (is(typeof(F!V) == bool))
        enum evalBool = F!V;
    else
        enum evalBool = false;
}

enum anySatisfy(alias F, T...) = [evalBool!(F, T)...].canFind(true);

-Steve

May 09
Nick Treleaven <nick geany.org> writes:
On Friday, 24 April 2020 at 21:00:08 UTC, Steven Schveighoffer
wrote:
import std.algorithm : canFind;
enum anySatisfy(alias F, T...) = [F!(T)...].canFind(true);
enum allSatisfy(alias F, T...) = ![F!(T)...].canFind(false);

That might be slower than the existing templates (now in
core.internal.traits), which don't use template recursion, and
short circuit:

template anySatisfy(alias F, Ts...)
{
    static foreach (T; Ts)
    {
        static if (!is(typeof(anySatisfy) == bool) && // not yet defined
                   F!T)
        {
            enum anySatisfy = true;
        }
    }
    static if (!is(typeof(anySatisfy) == bool)) // if not yet defined
    {
        enum anySatisfy = false;
    }
}

Your versions also require an extra import (although canFind
could be locally copied).

May 10
On Sunday, 10 May 2020 at 08:30:35 UTC, Nick Treleaven wrote:
On Friday, 24 April 2020 at 21:00:08 UTC, Steven Schveighoffer
wrote:
import std.algorithm : canFind;
enum anySatisfy(alias F, T...) = [F!(T)...].canFind(true);
enum allSatisfy(alias F, T...) = ![F!(T)...].canFind(false);

That might be slower than the existing templates (now in
core.internal.traits), which don't use template recursion, and
short circuit:

template anySatisfy(alias F, Ts...)
{
    static foreach (T; Ts)
    {
        static if (!is(typeof(anySatisfy) == bool) && // not yet defined
                   F!T)
        {
            enum anySatisfy = true;
        }
    }
    static if (!is(typeof(anySatisfy) == bool)) // if not yet defined
    {
        enum anySatisfy = false;
    }
}

Your versions also require an extra import (although canFind
could be locally copied).

Here is how this would look as a type function.

bool anySatisfy(bool function(alias) pred, alias[]... types)
{
    foreach (type; types)
    {
        if (pred(type))
            return true;
    }
    return false;
}

May 10
Steven Schveighoffer <schveiguy gmail.com> writes:
On 5/10/20 4:30 AM, Nick Treleaven wrote:
On Friday, 24 April 2020 at 21:00:08 UTC, Steven Schveighoffer wrote:
import std.algorithm : canFind;
enum anySatisfy(alias F, T...) = [F!(T)...].canFind(true);
enum allSatisfy(alias F, T...) = ![F!(T)...].canFind(false);

That might be slower than the existing templates (now in
core.internal.traits), which don't use template recursion, and short
circuit:

template anySatisfy(alias F, Ts...)
{
    static foreach (T; Ts)
    {
        static if (!is(typeof(anySatisfy) == bool) && // not yet defined
                   F!T)
        {
            enum anySatisfy = true;
        }
    }
    static if (!is(typeof(anySatisfy) == bool)) // if not yet defined
    {
        enum anySatisfy = false;
    }
}

I admit, I didn't look at the implementation, I just assumed it was one
of the recursive ones.

But the "performance" is relative to where the satisfying items are. It's a
good point though: evaluating all of the templates to see if one is true
leaves some performance on the table, and is another reason to prefer a
folding mechanism. I.e., this might short circuit automatically (if
supported):

enum anySatisfy(alias F, T...) = F!(T) || ...;

Your versions also require an extra import (although canFind could be
locally copied).

Yeah, a CTFE for-loop can easily be created for this if needed. It was
just easier to reach for something that exists to show the brevity.

-Steve

May 10
Simen =?UTF-8?B?S2rDpnLDpXM=?= <simen.kjaras gmail.com> writes:
On Friday, 24 April 2020 at 08:15:53 UTC, Walter Bright wrote:
On 4/24/2020 12:39 AM, Manu wrote:
It doesn't make much sense to think in terms of primitive
types. As I just said (but you truncated), operator overloads
are almost certainly part of this equation.

Sure they do - just think of expression templates.

--
Simen

Apr 24
Walter Bright <newshound2 digitalmars.com> writes:
On 4/24/2020 1:54 AM, Simen Kjærås wrote:

Sure they do - just think of expression templates.

We're not adding features to support expression templates with tuples.

ETs were "discovered" in the early aughts, and caused great excitement in the
C++ community. Lots of articles, presentations, and projects were done using
ETs.

Then ... nothing. They just vanished from the conversation. The problem is they
were ugly, clumsy, incomprehensible to debug, incredibly slow to compile, did
not work at scale (blowing up the compilers) and impossible to work with if you
weren't an expert.

They're a failed experiment.

Apr 24
Atila Neves <atila.neves gmail.com> writes:
On Friday, 24 April 2020 at 20:23:09 UTC, Walter Bright wrote:
On 4/24/2020 1:54 AM, Simen Kjærås wrote:

Sure they do - just think of expression templates.

We're not adding features to support expression templates with
tuples.

ETs were "discovered" in the early aughts, and caused great
excitement in the C++ community. Lots of articles,
presentations, and projects were done using ETs.

Then ... nothing. They just vanished from the conversation. The
problem is they were ugly, clumsy, incomprehensible to debug,
incredibly slow to compile, did not work at scale (blowing up
the compilers) and impossible to work with if you weren't an
expert.

They're a failed experiment.

ETs are very much alive in C++. Both Eigen and Boost.Spirit are
popular libraries, and this despite the colossal hit to
compile-times.

Apr 27
jmh530 <john.michael.hall gmail.com> writes:
On Monday, 27 April 2020 at 12:08:11 UTC, Atila Neves wrote:
[snip]

ETs are very much alive in C++. Both Eigen and Boost.Spirit are
popular libraries, and this despite the colossal hit to
compile-times.

Eigen is also a dependency in a number of other libraries. When
people really need extra performance, they will tolerate longer
compile-times until they have a better solution. The error
messages are not fun.

Apr 27
Atila Neves <atila.neves gmail.com> writes:
On Monday, 27 April 2020 at 13:02:03 UTC, jmh530 wrote:
On Monday, 27 April 2020 at 12:08:11 UTC, Atila Neves wrote:
[snip]

ETs are very much alive in C++. Both Eigen and Boost.Spirit
are popular libraries, and this despite the colossal hit to
compile-times.

Eigen is also a dependency in a number of other libraries. When
people really need extra performance, they will tolerate longer
compile-times until they have a better solution. The error
messages are not fun.

I know. Eigen is literally the reason why I can't convince any
friends who work at CERN to try D. My dream would be to have dpp
somehow enable calling it from D, but I'm not sure that'll ever
be possible.

Apr 27
Walter Bright <newshound2 digitalmars.com> writes:
On 4/27/2020 7:41 AM, Atila Neves wrote:
On Monday, 27 April 2020 at 13:02:03 UTC, jmh530 wrote:
On Monday, 27 April 2020 at 12:08:11 UTC, Atila Neves wrote:
[snip]

ETs are very much alive in C++. Both Eigen and Boost.Spirit are popular
libraries, and this despite the colossal hit to compile-times.

Eigen is also a dependency in a number of other libraries. When people really
need extra performance, they will tolerate longer compile-times until they
have a better solution. The error messages are not fun.

I know. Eigen is literally the reason why I can't convince any friends who
work at CERN to try D. My dream would be to have dpp somehow enable calling
it from D, but I'm not sure that'll ever be possible.

Actually, you can write expression templates in D (I've done it as a demo). But
it is a bit more limited in D because D doesn't allow separate overloads for <
<= > >=, for example.

As far as Boost Spirit goes, more than one person has made a D parser
generator using mixins that is far better.

Apr 27
"H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Apr 27, 2020 at 03:15:53PM -0700, Walter Bright via Digitalmars-d wrote:
[...]
Actually, you can write expression templates in D (I've done it as a
demo).  But it is a bit more limited in D because D doesn't allow
separate overloads for < <= > >=, for example.

As far as Boost Spirit goes, more than one person has made a D parser
generator using mixins that is far better.

Operator overload abuse of the kind Boost Spirit does makes me cringe.
Not only does it uglify the code, it's also unnatural to write because the
abused built-in operators don't even look like how traditional grammar
operators would be written, making it unnatural to read in addition to
being a pain to write. (And don't even talk about the nastiness of
debugging that mess, with reams of errors each of which occupies
multiple pages of unreadably-long nested template symbols.)

D's recommended approach of using a string mixin or string argument to a
CTFE generator function is *so* much better. You can write normal code
that generates a parser instead of deeply-nested recursive templates,
and you can use natural notation instead of contorting everything to fit
into built-in operators that have been shoehorned into purposes utterly
alien to their original design.

On a larger-picture note, I think that in the long term the way of
progress lies with first-class types manipulated by CTFE functions that
treats them like "runtime" values, that Stefan has been proposing. While
templates are Turing-complete, and in theory can express all possible
compile-time manipulations you might want to do, actually doing so is
like trying to write a GUI application in lambda calculus. It's
*possible* but extremely tedious, error-prone, and just overall painful
to write, maintain, and debug.  (Not to mention consumes gobs of
resources at compile-time and causing insane compiler slowdowns.)  Being
able to manipulate types and arrays of types (tuples, and the rest of
its ilk) will bring D meta-programming into a whole new level that will
outpace every other language capable of meta-programming that I know of.
(Maybe besides Lisp macros. :P)  Couple this with newCTFE, and I think
we have a killer combo.

T

--
Never ascribe to malice that which is adequately explained by incompetence. --
Napoleon Bonaparte

Apr 27
rikki cattermole <rikki cattermole.co.nz> writes:
A while back on IRC we came up with an idea for first class types.

https://gist.github.com/rikkimax/046fb4451e8cbac354ecb292f9a76798#file-first-class-types-md

Basically just an extension of TypeInfo.

Apr 27
On Tuesday, 28 April 2020 at 01:20:50 UTC, rikki cattermole wrote:
A while back on IRC we came up with an idea for first class
types.

https://gist.github.com/rikkimax/046fb4451e8cbac354ecb292f9a76798#file-first-class-types-md

Basically just an extension of TypeInfo.

I was going for a solution which leaves most of the syntax as is.
All that requires is slightly special handling of most tuple
returning __traits, as well as a way to create "assignable"
tuples.
(Which are not actually tuples but rather references to tuples)

Apr 27
Atila Neves <atila.neves gmail.com> writes:
On Monday, 27 April 2020 at 22:15:53 UTC, Walter Bright wrote:
On 4/27/2020 7:41 AM, Atila Neves wrote:
On Monday, 27 April 2020 at 13:02:03 UTC, jmh530 wrote:
On Monday, 27 April 2020 at 12:08:11 UTC, Atila Neves wrote:
[snip]

I know. Eigen is literally the reason why I can't convince any
friends who work at CERN to try D. My dream would be to have
dpp somehow enable calling it from D, but I'm not sure that'll
ever be possible.

Actually, you can write expression templates in D (I've done it
as a demo). But it is a bit more limited in D because D doesn't
allow separate overloads for < <= > >=, for example.

These limitations are one of the reasons why I'm not sure it's
possible to call into Eigen. Another is trying to translate C++
features that don't exist in D such as reference types.

As far as Boost Spirit goes, more than one person has made a D
parser generator using mixins that is far better.

I don't think D is even in the same league here and is obviously
better. My point was that ETs are alive and well in C++.

Apr 28
On Monday, 27 April 2020 at 12:08:11 UTC, Atila Neves wrote:
ETs are very much alive in C++. Both Eigen and Boost.Spirit are
popular libraries, and this despite the colossal hit to
compile-times.

One of the good things about D is that Boost is not in the
ecosystem.
Rather we have phobos which tries to come close sometimes.

Apr 27
Manu <turkeyman gmail.com> writes:
On Fri, Apr 24, 2020 at 4:35 PM Walter Bright via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

There's a lot of merit to the idea of arrays and tuples using a common
syntax,
which is what I'm proposing.

Your idea all falls over anywhere it encounters variadic args, or potential
for overloads though. Without an explicit expression, the only way forward
is to preserve existing D semantics (or it's a radically breaking change),
and that will severely narrow the applicability of this proposal.

Consider this:

alias Tup = AliasSeq!(0, 2, 3);
void fun(int);
fun(Tup);  // scalar argument receives tuple, it can expand, so: fun(0),
fun(2), fun(3)

void fun(int, int, int);

So, the previous expression expanded, but now it finds an applicable call
and doesn't expand anymore.
Obviously that's far too dangerous to allow, so; functions MAY have
overloads (or just accept var args), and I think that simply means that
expansion can never be applied to function arguments.
That's a HUGE loss, because I do that extremely frequently. Probably more
than anything.

This same principle argument applies to templates which may receive variadic
arguments.
It weakens the value in my DIP by a huge amount to forego the explicit
syntax designed specifically to handle the resolution of ambiguous
expressions.

This doesn't rule out your proposition that Tup + 10 can (and probably
should) work though. Again, these are not mutually exclusive proposals.

Apr 24
Piotr Mitana <piotr.mitana gmail.com> writes:
My two cents; the idea is nice, however I would consider a bit
different syntax.

1. We already have std.typecons.Tuple.expand, which does the
similar thing. If it is doable with a library solution, it could
be Expand!(TypeTuple).

2. If not, the ... operator is a good choice here, but I would
suggest making it prefix, and not postfix. First, it would nicely
differentiate it from the variadic operator. Second, it would be
straightforward and consistent for those using Javascript as
well, as ... is a spread operator there - call(...[a, b, c]) is
equivalent to call(a, b, c).

Such a spread operator in D could work for TypeTuples, value
tuples and static arrays. Simple, concise and similar to what
one of the most widely used languages does.

Apr 24
Walter Bright <newshound2 digitalmars.com> writes:
On 4/24/2020 12:10 AM, Manu wrote:
Your idea all falls over anywhere it encounters variadic args, or potential
for overloads though. Without an explicit expression, the only way forward is
to preserve existing D semantics (or it's a radically breaking change), and
that will severely narrow the applicability of this proposal.

Consider this:

alias Tup = AliasSeq!(0, 2, 3);
void fun(int);
fun(Tup);  // scalar argument receives tuple, it can expand, so: fun(0),
fun(1), fun(2)

Write it as:

Tup.fun();

Of course,

fun(1, Tup);

cannot be rewritten in this way, but then a rationale will be necessary as to
why it must be written as fun(1, Tup) instead of fun(Tup, 1).

This same principle argument applies to templates which may receive variadic
arguments.
It weakens the value in my DIP by a huge amount to forego the explicit syntax
designed specifically to handle the resolution of ambiguous expressions.

Please enumerate examples, along with an explanation of why they are
compelling and why it is unreasonable to do another way. (One such other way
would be using static foreach.)

This doesn't rule out your proposition that Tup + 10 can (and probably
should) work though.

At least we agree on something :-)

On a philosophical point, I am generally opposed to adding syntax just to
support some unusual cases. I am sure one could come up with examples where the
... syntax won't work, too. For example, could it work with creating a new
tuple out of every third tuple member? I'm not looking for a specific answer,
but for
a meta answer of "should we support these operations at all with special
syntax"?

My predisposition is towards simple building blocks that can be combined to
derive complex behaviors, not complex building blocks that we try to bash into
simple behaviors. Simple things like applying the existing behaviors of array
operations to tuples. D shouldn't consist of a mass of special rules, but a
smaller set of rules that can be consistently applied across diverse situations.

We did the same thing with named function arguments - apply the existing rules
for named struct initializers. It didn't support every behavior people desired,
but it supported enough that the remainder wasn't particularly significant.

Apr 24
Walter Bright <newshound2 digitalmars.com> writes:
On 4/24/2020 1:03 AM, Walter Bright wrote:
On 4/24/2020 12:10 AM, Manu wrote:
Your idea all falls over anywhere it encounters variadic args, or potential
for overloads though. Without an explicit expression, the only way forward is
to preserve existing D semantics (or it's a radically breaking change), and
that will severely narrow the applicability of this proposal.

Consider this:

alias Tup = AliasSeq!(0, 2, 3);
void fun(int);
fun(Tup);  // scalar argument receives tuple, it can expand, so: fun(0),
fun(1), fun(2)

Write it as:

Tup.fun();

Incidentally, belay that. That will currently produce: fun(0, 2, 3);

Apr 24
Nick Treleaven <nick geany.org> writes:
On Friday, 24 April 2020 at 08:04:29 UTC, Walter Bright wrote:
On 4/24/2020 1:03 AM, Walter Bright wrote:
On 4/24/2020 12:10 AM, Manu wrote:
alias Tup = AliasSeq!(0, 2, 3);
void fun(int);
fun(Tup);  // scalar argument receives tuple, it can
expand, so: fun(0), fun(1), fun(2)

Write it as:

Tup.fun();

Incidentally, belay that. That will currently produce: fun(0,
2, 3);

This syntax is an unfortunate inconsistency with your proposal,
but how often is variadic UFCS used ATM? Its existence has been
pointed out in a NG reply before (I think by Timon), but it
seemed to surprise people. Perhaps it could be deprecated - use
fun(Tup) instead. The latter is more intuitive as people tend to
think UFCS is for the first argument, not multiple arguments. The
spec doesn't seem to suggest variadic UFCS is supported:

https://dlang.org/spec/function.html#pseudo-member

Of course,

fun(1, Tup);

cannot be rewritten in this way

AliasSeq!(1, Tup).fun(); // fun(1); fun(0); fun(2); fun(3);

Apr 24
Walter Bright <newshound2 digitalmars.com> writes:
On 4/24/2020 5:55 AM, Nick Treleaven wrote:
On Friday, 24 April 2020 at 08:04:29 UTC, Walter Bright wrote:
On 4/24/2020 1:03 AM, Walter Bright wrote:
On 4/24/2020 12:10 AM, Manu wrote:
alias Tup = AliasSeq!(0, 2, 3);
void fun(int);
fun(Tup);  // scalar argument receives tuple, it can expand, so: fun(0),
fun(1), fun(2)

Write it as:

Tup.fun();

Incidentally, belay that. That will currently produce: fun(0, 2, 3);

This syntax is an unfortunate inconsistency with your proposal, but how often
is variadic UFCS used ATM? Its existence has been pointed out in a NG reply
before (I think by Timon), but it seemed to surprise people. Perhaps it could
be deprecated - use fun(Tup) instead. The latter is more intuitive as people
tend to think UFCS is for the first argument, not multiple arguments.

Whether it's intuitive or not depends on your point of view. For example, if
you view a tuple as members of a struct, the current behavior makes perfect
sense. (I've actually wanted to make a tuple equivalent to a struct, but the
darned function call ABI got in the way.)

Of course,

fun(1, Tup);

cannot be rewritten in this way

AliasSeq!(1, Tup).fun(); // fun(1); fun(0); fun(2); fun(3);

Are you sure? It does fun(1, 0, 2, 3) when I try it.

Apr 24
Adam D. Ruppe <destructionator gmail.com> writes:
On Friday, 24 April 2020 at 12:55:15 UTC, Nick Treleaven wrote:
This syntax is an unfortunate inconsistency with your proposal,
but how often is variadic UFCS used ATM?

It is fairly important for the string interpolation DIP that's
pending. i"foo".idup would expand to a UFCS tuple call in my
proposal.

I might be able to adapt without it anyway.... but I do think
T.foo() going silently to <foo(T[0]), foo(T[1])> is not ideal.

I can live with the T.foo()... syntax though I'm not crazy about it

The idea is good though. And cutting out the library thing means
it can be used in more contexts too (no more forcing CTFE due to
passing them as compile-time params!)

Apr 24
Manu <turkeyman gmail.com> writes:
On Fri, Apr 24, 2020 at 11:00 PM Nick Treleaven via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

On Friday, 24 April 2020 at 08:04:29 UTC, Walter Bright wrote:
On 4/24/2020 1:03 AM, Walter Bright wrote:
On 4/24/2020 12:10 AM, Manu wrote:
alias Tup = AliasSeq!(0, 2, 3);
void fun(int);
fun(Tup);  // scalar argument receives tuple, it can
expand, so: fun(0), fun(2), fun(3)

Write it as:

Tup.fun();

Incidentally, belay that. That will currently produce: fun(0,
2, 3);

This syntax is an unfortunate inconsistency with your proposal,
but how often is variadic UFCS used ATM? Its existence has been
pointed out in a NG reply before (I think by Timon), but it
seemed to surprise people. Perhaps it could be deprecated - use
fun(Tup) instead. The latter is more intuitive as people tend to
think UFCS is for the first argument, not multiple arguments. The
spec doesn't seem to suggest variadic UFCS is supported:

https://dlang.org/spec/function.html#pseudo-member

Of course,

fun(1, Tup);

cannot be rewritten in this way

AliasSeq!(1, Tup).fun(); // fun(1); fun(0); fun(2); fun(3);

I pointed out earlier, and I'll point out again, that Walter's proposal
allows for hijacking.

int fun(int);
fun(Tup);  ->  fun(Tup[0]), fun(Tup[1]), fun(Tup[2])

This looks intuitive, but then someone adds:

void fun(int, int, int);

Or maybe it was already there...
So, a 2 or >=4 length tuple could expand, but a 3-length tuple would call the
3-arg overload instead.
What if the function has variadic args?
The hijacking possibility just wipes this whole thing off the table.

The ambiguities in Walter's suggestion are impossible to reconcile in a
uniform and satisfying way. That's exactly why I moved away from that idea
and towards an explicit expression. It's the only way.
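For readers who know C++, the hijacking concern maps directly onto the distinction C++17 forces you to make explicit with the placement of `...`. A minimal sketch (fun/callOnce/callEach are illustrative names, not from the DIP):

```cpp
#include <vector>

static std::vector<int> calls; // records the arity of each fun overload hit

void fun(int) { calls.push_back(1); }
void fun(int, int, int) { calls.push_back(3); }

// expands the pack INTO one call: fun(0, 2, 3) -> the 3-arg overload "hijacks"
template <typename... Ts>
void callOnce(Ts... xs) { fun(xs...); }

// C++17 comma fold: one fun(x) call PER element -> always the 1-arg overload
template <typename... Ts>
void callEach(Ts... xs) { (fun(xs), ...); }
```

Because the programmer writes the expansion site, the two meanings can never silently swap when a new overload appears; this is the ambiguity an implicit expansion cannot resolve.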

Apr 24
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/24/20 4:03 AM, Walter Bright wrote:
On a philosophical point, I am generally opposed to adding syntax just
to support some unusual cases. I am sure one could come up with examples
where the .... syntax won't work, too. For example, could it work with
creating a new tuple out of every third tuple member? I'm not looking
for a specific answer, but for a meta answer of "should we support these
operations at all with special syntax"?

The point of this is to reduce the amount of "trivial" templates. Every
time a template is instantiated, it consumes memory in the compiler, it
creates more stuff to manage in the symbol tables, and because templates
have to "be the same" for every instantiation, you have weird paradoxes.

One of the fundamental problems with template power in D is that in
order to operate on lists, you need a functional-style recurse or divide
and conquer, which multiplies the number of templates needed by possibly
hundreds or thousands. This takes time, resources, and can result in
things that don't compile or are extremely slow to compile.

What I think this proposal does is to remove an entire class of
recursive templates, and replaces them with simple loops that the
compiler can execute blindfolded with 4 cores tied behind its back with
ease, faster than the template expansions required by e.g. staticMap.

I think the ellipsis version is superior simply because it has more
expressive power (see my posts elsewhere). An ideal proposal would allow
all things to be handled within one expression, but the ellipsis is
better in that it is unambiguous and does not break code.

And actually, with a staticIota-like thing you could do every third
tuple member quite easily.

alias everyThird = staticIota!(2, Tup.length, 3); // start, end, step

alias everyThirdMember = (Tup...[everyThird])...;

staticIota is kind of another primitive that is super-useful, and would
be easy for the compiler to provide. But could be done via CTFE as well.
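For comparison, this is precisely what C++ pack expansion buys: a staticMap-equivalent in a single expansion, with no recursive instantiations. A minimal C++17 sketch (add3/mapAdd3 are illustrative names):

```cpp
#include <array>

constexpr int add3(int x) { return x + 3; }

// The whole "staticMap" happens in one expansion: no divide and conquer,
// no O(n) helper instantiations -- the compiler just loops over the pack.
template <int... Vals>
constexpr auto mapAdd3() { return std::array{ add3(Vals)... }; }
```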

-Steve

Apr 24
Meta <jared771 gmail.com> writes:
On Friday, 24 April 2020 at 17:10:16 UTC, Steven Schveighoffer
wrote:
staticIota is kind of another primitive that is super-useful,
and would be easy for the compiler to provide. But could be
done via CTFE as well.

-Steve

"staticIota" *should* be a primitive, and D even supports this
functionality in exactly 1 place:

foreach (i; 0..10) { do something }

I wish this was an actual language construct so it could be used
in other places, and not just some hard coded syntax sugar.

Apr 24
Walter Bright <newshound2 digitalmars.com> writes:
On 4/24/2020 10:10 AM, Steven Schveighoffer wrote:
The point of this is to reduce the amount of "trivial" templates. Every time a
template is instantiated, it consumes memory in the compiler, it creates more
stuff to manage in the symbol tables, and because templates have to "be the
same" for every instantiation, you have weird paradoxes.

Surprisingly, I do get this!

One of the fundamental problems with template power in D is that in order to
operate on lists, you need a functional-style recurse or divide and conquer,
which multiplies the number of templates needed by possibly hundreds or
thousands. This takes time, resources, and can result in things that don't
compile or are extremely slow to compile.

What I think this proposal does is to remove an entire class of recursive
templates, and replaces them with simple loops that the compiler can execute
blindfolded with 4 cores tied behind its back with ease, faster than the
template expansions required by e.g. staticMap.

No argument there.

I think the ellipsis version is superior simply because it has more expressive
power (see my posts elsewhere). An ideal proposal would allow all things to be
handled within one expression, but the ellipsis is better in that it is
unambiguous and does not break code.

My issue is finding the best way to do this. Adding powerful syntax "just
because we can" leads to an unusable language. Cue my opposition to AST
macros.

And actually, with a staticIota-like thing you could do every third tuple
member
quite easily.

alias everyThird = staticIota!(2, Tup.length, 3); // start, end, step

alias everyThirdMember = (Tup...[everyThird])...;

Touche. Well done.

Let's not fall into the mode of only looking at the way C++ did it and not
seeing other ways. C++ has problems (like not having arrays) that lead it in
different directions for solving array-like problems.

What do other languages do? How are things like this expressed in mathematics?

staticIota is kind of another primitive that is super-useful, and would be
easy
for the compiler to provide.

Where does one stop in adding operators to the language?

Another approach to resolving the original problem (template instantiation
bloat) is for the compiler to recognize templates like AliasSeq as "special"
and
implement them directly.

For example, AliasSeq is defined as:

template AliasSeq(TList...) { alias AliasSeq = TList; }

This pattern is so simple it can be recognized by the compiler. (Having
std.meta.AliasSeq being a special name known to the compiler is not necessary.)
The compiler already recognizes certain patterns, such as the expression that
does a rotate, and then generates the CPU rotate instruction for it.

Apr 24
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/24/20 4:55 PM, Walter Bright wrote:
On 4/24/2020 10:10 AM, Steven Schveighoffer wrote:
I think the ellipsis version is superior simply because it has more
expressive power (see my posts elsewhere). An ideal proposal would
allow all things to be handled within one expression, but the ellipsis
is better in that it is unambiguous and does not break code.

My issue is finding the best way to do this. Adding powerful syntax
"just because we can" leads to an unusable language. Cue my opposition
to AST macros.

It's not just the expressive power, but the simple expressiveness of the
array syntax is missing. There are too many ambiguities without adding
syntax because expressions that use tuples can already work with them.

The simplest is "make me an array of all these values"

[tup * 2]

But I don't see how the compiler knows which parts of the expression are
part of the expansion, and which parts aren't. I would fully expect the
above to result in:

([tup[0] * 2], [tup[1] * 2], ...)

And if it doesn't, then the compiler is making odd arbitrary decisions
as to which part of the expression is expandable.

Which means, essentially, the odd bizarre cases are doable, but the
useful ones are not.

The ellipsis doesn't do much in terms of functionality, but it gives you
a place to tag "this is what I want you to expand", so the compiler
makes the right decisions.
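C++17 shows how that tag works in practice: the placement of `...` relative to the brackets selects which of the two readings you get. An illustrative sketch of both readings of "[tup * 2]":

```cpp
#include <array>
#include <tuple>

// ellipsis INSIDE the braces: expand (x * 2) for each x -> one array
template <int... Xs>
constexpr auto doubledAll() { return std::array{ (Xs * 2)... }; }

// ellipsis OUTSIDE: one single-element array per x -> a tuple of arrays,
// i.e. the ([tup[0]*2], [tup[1]*2], ...) reading
template <int... Xs>
constexpr auto doubledEach() { return std::make_tuple(std::array{ Xs * 2 }...); }
```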

And actually, with a staticIota-like thing you could do every third
tuple member quite easily.

alias everyThird = staticIota!(2, Tup.length, 3); // start, end, step

alias everyThirdMember = (Tup...[everyThird])...;

Touche. Well done.

Let's not fall into the mode of only looking at the way C++ did it and
not seeing other ways. C++ has problems (like not having arrays) that
lead it in different directions for solving array-like problems.

I'm fully on board with breaking with C++ when it doesn't make sense. In
the case of the array syntax, I think it's just not workable.

What do other languages do? How are things like this expressed in
mathematics?

Math is a great inspiration: https://en.wikipedia.org/wiki/Sequence

Lots of positional notation, but you can see it uses an ellipsis to
denote "continues on".

The question is, how can we denote "apply this expression to every
element using these tuples". That's the goal. I don't know how we can do
it with any more brevity or readability than adding a specific operator
for it or adding a symbol for it.

staticIota is kind of another primitive that is super-useful, and
would be easy for the compiler to provide.

Where does one stop in adding operators to the language?

You can do staticIota without language help. Just like you can do ALL of
this without language help. The point where you stop is when it doesn't
give you a 10-50x speedup in compilation and 50% reduction in memory by
letting the compiler take care of it.

Note that I'm not saying we can't do staticIota in library code. But
really, "count from 0 to N" is such a simple thing for a computer to do,
it seems really wasteful to have to do it via symbol tables and CTFE.
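C++ ended up handing exactly this job to the compiler: most implementations generate std::make_index_sequence intrinsically rather than by recursive instantiation. A sketch of a 0..N staticIota built on it (the start/end/step variant from above is omitted):

```cpp
#include <array>
#include <cstddef>
#include <utility>

// expand an index_sequence into the array {0, 1, ..., N-1}
template <std::size_t... Is>
constexpr auto iotaArray(std::index_sequence<Is...>) {
    return std::array{ Is... };
}

// make_index_sequence is typically a compiler builtin: "count from 0 to N"
// without N recursive template instantiations
template <std::size_t N>
constexpr auto staticIota() {
    return iotaArray(std::make_index_sequence<N>{});
}
```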

Another approach to resolving the original problem (template
instantiation bloat) is for the compiler to recognize templates like
AliasSeq as "special" and implement them directly.

For example, AliasSeq is defined as:

template AliasSeq(TList...) { alias AliasSeq = TList; }

This pattern is so simple it can be recognized by the compiler. (Having
std.meta.AliasSeq being a special name known to the compiler is not
necessary.)
The compiler already recognizes certain patterns, such as the expression
that does a rotate, and then generates the CPU rotate instruction for it.

Not arguing with that. The faster D can get, the better, regardless of
whether we add any other syntax features.

One thing I would say is that AliasSeq is going to be identical for
every instantiation, so there is no reason to store it in the symbol
table. Just generate it every time, and then you cut down on a ton of
template memory usage. Same thing could be said for a staticIota.

-Steve

Apr 24
On Friday, 24 April 2020 at 20:55:19 UTC, Walter Bright wrote:
Another approach to resolving the original problem (template
instantiation bloat) is for the compiler to recognize templates
like AliasSeq as "special" and implement them directly.

For example, AliasSeq is defined as:

template AliasSeq(TList...) { alias AliasSeq = TList; }

This pattern is so simple it can be recognized by the compiler.
(Having std.meta.AliasSeq being a special name known to the
compiler is not necessary.)
The compiler already recognizes certain patterns, such as the
expression that does a rotate, and then generates the CPU
rotate instruction for it.

AliasSeq itself is not the problem at all.
The problem is that when operating on an AliasSeq you need to
instantiate new templates to keep the results in a "TypeExp"
state, where they can still be manipulated as lists.
You can detect operations you might do on type lists and implement
them as special cases in the compiler, but then you end up with
many specialized cases which have to be treated differently.
That pushes complexity from the language into compiler
implementation details.
And that's where it shouldn't be, because who is going to verify
that?
People here who are familiar with the C++11 syntax have already
found bugs.
That won't be possible if it's an unspecified set of rewrites
that take place at an unspecified stage.

Apr 24
Walter Bright <newshound2 digitalmars.com> writes:
On 4/24/2020 1:55 PM, Walter Bright wrote:
How are things like this expressed in mathematics?

I did a little research:

∀xP(x)

means:

for all x in P(x)

or in ... notation:

P(x...)

Apr 24
Walter Bright <newshound2 digitalmars.com> writes:
On 4/24/2020 3:21 PM, Walter Bright wrote:
I did a little research:

Actually, Andrei did.

Apr 24
Adam D. Ruppe <destructionator gmail.com> writes:
On Friday, 24 April 2020 at 22:21:31 UTC, Walter Bright wrote:
I did a little research:

∀xP(x)

means:

for all x in P(x)

or in ... notation:

P(x...)

Shouldn't it actually be

P(x)...

or am i misunderstanidng?

Apr 24
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/24/20 7:07 PM, Adam D. Ruppe wrote:
On Friday, 24 April 2020 at 22:21:31 UTC, Walter Bright wrote:
I did a little research:

∀xP(x)

means:

for all x in P(x)

First, please can we avoid untypeable symbols?

Second, this really is a "function" of the expression, not the tuple. A
tuple already means "all its elements". What we want to say to the
compiler is to expand the expression for each of the tuple combinations
included.

It's very much like array vector expressions. However, array vector
expressions work across a whole statement. This cannot be that, it has
to work within an expression, because you can't declare an intermediate
expression like a parameter tuple. If you want to use vector expressions
inside a larger expression, you can put them in a temporary first.

Is there really an issue with ... so much that we need to search for
something else? Or are you complaining about the feature itself? I think

or in ... notation:

P(x...)

Shouldn't it actually be

P(x)...

Yes.

-Steve

Apr 24
Adam D. Ruppe <destructionator gmail.com> writes:
On Saturday, 25 April 2020 at 00:57:54 UTC, Steven Schveighoffer
wrote:
First, please can we avoid untypeable symbols?

I'm not for the symbol! I just want to verify understanding.

I'm not crazy about ... either but it does seem to be the best
option available.

I plan on writing up a couple examples later but my brain isn't
on point right now.

Apr 24
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/24/20 9:05 PM, Adam D. Ruppe wrote:
On Saturday, 25 April 2020 at 00:57:54 UTC, Steven Schveighoffer wrote:
First, please can we avoid untypeable symbols?

I'm not for the symbol! I just want to verify understanding.

I'm not crazy about ... either but it does seem to be the best option
available.

I plan on writing up a couple examples later but my brain isn't on point
right now.

Sorry, that was a double response, both to you and Walter.

-Steve

Apr 24
Timon Gehr <timon.gehr gmx.ch> writes:
On 25.04.20 02:57, Steven Schveighoffer wrote:
untypeable symbols

Pet peeve. There's no such thing. Just configure your computer so you
can type them.

May 09
rikki cattermole <rikki cattermole.co.nz> writes:
On 10/05/2020 3:46 AM, Timon Gehr wrote:
On 25.04.20 02:57, Steven Schveighoffer wrote:
untypeable symbols

Pet peeve. There's no such thing. Just configure your computer so you
can type them.

ų = ⎄ , u

Compose ♥, I mean anything is type-able if you care enough.

May 09
Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On Friday, 24 April 2020 at 22:21:31 UTC, Walter Bright wrote:
On 4/24/2020 1:55 PM, Walter Bright wrote:
How are things like this expressed in mathematics?

I did a little research:

∀xP(x)

means:

for all x in P(x)

Minor note: I don't think it means for all x _in_ P(x).  Rather,
here, we're evaluating P(x) for all values of x.

If P(x) is a propositional statement (i.e. a statement about x
that can be true or false), this means that ∀xP(x) represents
whether P(x) is true for all x or not.

I don't recall ever seeing this notation used other than to
combine propositional statements in this way.

If you want to talk about all values of a function f(x) for all
values of x in some set X, then it would be an _image_ of X under
f:

f[X] = { f(x) | x ∈ X }

... so if you want to talk about all elements of a given set, it
would be as simple as:

{ x | x ∈ X }

which is the source of the "list comprehension" notations you get
in various languages (e.g. pythonic x for x in X, but also of
course f(x) for x in X).  See:
https://en.wikipedia.org/wiki/Set-builder_notation#Parallels_in_programming_languages
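As a concrete illustration of the image/comprehension parallel in Python (f is an arbitrary example function):

```python
def f(x):
    # an arbitrary illustrative function
    return x * x

X = {1, 2, 3, 4}

# image of X under f: f[X] = { f(x) | x in X }
image = {f(x) for x in X}

# the identity comprehension { x | x in X } just reproduces X
same = {x for x in X}
```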

Apr 25
Walter Bright <newshound2 digitalmars.com> writes:
On 4/25/2020 2:09 AM, Joseph Rushton Wakeling wrote:
[...]

Thank you. This is good stuff!

Apr 26
Johannes Loher <johannes.loher fg4f.de> writes:
On Friday, 24 April 2020 at 22:21:31 UTC, Walter Bright wrote:
On 4/24/2020 1:55 PM, Walter Bright wrote:
How are things like this expressed in mathematics?

I did a little research:

∀xP(x)

means:

for all x in P(x)

or in ... notation:

P(x...)

This notation describes a proposition as in propositional
calculus. It is not something one would use to describe a "static
map" over tuples.

In order to use the typical mathematical notation, we first need
to understand what a tuple is in mathematics. Typically this
terminology is used for an element of a product of specific
objects. For example, let’s consider \R^2 (2 dimensional vector
space with real-valued entries), which is the product of \R and
\R. The elements of \R^2 can be called 2-tuples. You can write it
as (x_1, x_2) \in \R^2. Now the number of factors can be
something different than 2 of course. In fact it doesn't need to
be finite, but as D only supports finite tuples (as of now),
let's only consider a finite number of factors for now. Also the
factors do not need to be the same. So let n \in \N (i.e. n is a
natural number), then consider the finite set M := \{1, ..., n\}.
For each m \in M, let A_m be some object (for understanding, you can
consider it to be a set, or a vector space, but it can be a lot
of different things). Then the product of those A_m is written as
\prod_{m \in M} A_m. An element of this product can be called an
n-tuple (there are n entries) and is written as (a_m)_{m \in M}.
Now suppose that we have a map f, that can be applied to each of
the factors of the product and we want to create a new tuple by
applying f to each of its entries. We would write the new tuple
as (f(a_m))_{m \in M}. As you can see, there is no „expansion“ or
anything, it is just a declarative description of how the tuple
looks after „mapping“.

If you want a similar notation that’s „common“ in programming
languages, (list) comprehensions is what is closest. E.g. in
python, you can do the following:

f = lambda i: i + 1   # any function applicable to the entries
my_tuple = (1, 2, 3, 4)
my_mapped_tuple = tuple(f(i) for i in my_tuple)  # without tuple(), this is a lazy generator

Apr 25
Walter Bright <newshound2 digitalmars.com> writes:
On 4/25/2020 3:08 AM, Johannes Loher wrote:
[...]

Thank you!

Apr 26
Timon Gehr <timon.gehr gmx.ch> writes:
On 25.04.20 00:21, Walter Bright wrote:
On 4/24/2020 1:55 PM, Walter Bright wrote:
How are things like this expressed in mathematics?

I did a little research:

∀xP(x)

means:

for all x in P(x)

or in ... notation:

P(x...)

Actually, ∀x. P(x) is closer to the meaning of P(x) && ... with Manu's
notation.

P(x)... could be something like:

expand x in P(x).
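Timon's reading corresponds one-to-one with C++17 fold expressions, where `(P(x) && ...)` folds the predicate over the pack while plain `P(x)...` is element-wise expansion. An illustrative sketch:

```cpp
#include <array>

constexpr bool P(int x) { return x > 0; }

// "forall x. P(x)": a C++17 fold over && -- i.e. P(x) && ...
template <int... Xs>
constexpr bool allP() { return (P(Xs) && ...); }

// plain expansion "P(x)...": one result per element
template <int... Xs>
constexpr auto eachP() { return std::array{ P(Xs)... }; }
```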

May 09
Manu <turkeyman gmail.com> writes:
On Sat, Apr 25, 2020 at 7:00 AM Walter Bright via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

Let's not fall into the mode of only looking at the way C++ did it and not
seeing other ways. C++ has problems (like not having arrays) that lead it
in
different directions for solving array-like problems.

It is so predictable that you will eventually produce a sentence like this
whenever I suggest anything.
To be clear; **I started with your proposal** when thinking on this. I
quickly determined it was unworkable.
The fact that I chose the same syntax as C++ is for a few reasons:
1. we also use ... to build tuples, it seems natural to invoke the same
syntax here
2. it turns out that it's grammatically unambiguous!
3. it's familiar and unsurprising, and there's no compelling reason to
deviate from C++ (which would be the only argument against, as far as I can
tell)

It's also not a bad idea to look to acclaimed success stories for
inspiration, and I think ... is probably C++'s biggest success story this
century.
It is not awkward or troubling in C++. It's an unusually excellent piece of
language.

With respect to D though, we gain so much more than C++ can. C++ doesn't
have tuples, but we do!
It applies with a beautiful uniformity, and it's useful in a lot more
situations and with a lot less boilerplate cruft than C++ could ever dream.
This will make junk template instantiations a thing of the past.

I can't imagine a reason to change my proposal. The only reason I would
consider changing it is if you can make your (like-arrays) proposal work...
but it doesn't. I explored that a lot, believe it or not. I really wanted
to believe it could work.

What do other languages do? How are things like this expressed in
mathematics?

C++ uses ..., and they are the only language that has anything remotely
like this.
Javascript also uses ... for something similar-ish, so the web guys
should find it familiar too.

Apr 24
Walter Bright <newshound2 digitalmars.com> writes:
On 4/24/2020 6:31 PM, Manu wrote:
On Sat, Apr 25, 2020 at 7:00 AM Walter Bright via Digitalmars-d
<digitalmars-d puremagic.com <mailto:digitalmars-d puremagic.com>> wrote:

Let's not fall into the mode of only looking at the way C++ did it and not
seeing other ways. C++ has problems (like not having arrays) that lead it
in
different directions for solving array-like problems.

It is so predictable that you will eventually produce a sentence like this
whenever I suggest anything.

I could have phrased that better.

I can't imagine a reason to change my proposal. The only reason I would
consider
changing it is if you can make your (like-arrays) proposal work... but it
doesn't. I explored that a lot, believe it or not. I really wanted to believe
it
could work.

I want to make sure it won't work, too, before abandoning it. I can understand
finding that frustrating :-)

What do other languages do? How are things like this expressed in
mathematics?

C++ uses ..., and they are the only language that has anything remotely like
this.
Javascript also uses ... for something similar-ish, so the web guys should
find it familiar too.

I don't remember Javascript doing that, though it's been 20 years since I
worked
on the JS compiler. Maybe it's a later addition.

Apr 26
Sebastiaan Koppe <mail skoppe.eu> writes:
On Sunday, 26 April 2020 at 10:20:49 UTC, Walter Bright wrote:
On 4/24/2020 6:31 PM, Manu wrote:
C++ uses ..., and they are the only language that has
anything remotely like this.
Javascript also uses ... for something similar-ish, so the
web guys should find it familiar too.

I don't remember Javascript doing that, though it's been 20
years since I worked on the JS compiler. Maybe it's a later

It is a recent addition, ES6 I believe.

They use it in destructuring:


const {a, b, ...rest} = {a:1, b:2, c:3, d:4}; // rest == {c:3, d:4}


and in expanding arrays


function foo(a,b) {}
const ab = [1,2];
foo(...ab);


or objects


const ab = {a: 1, b: 2};
const cd = {...ab, c:3, d:4}; // cd == {a:1, b:2, c:3, d:4}
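For reference, all three uses run as written in any ES2018+ runtime (object rest/spread landed in ES2018; array spread in calls is ES6). A minimal self-contained sketch:

```javascript
// rest in destructuring: collects the remaining properties
const {a, b, ...rest} = {a: 1, b: 2, c: 3, d: 4};

// spreading an array into a call
function foo(x, y) { return x + y; }
const ab = [1, 2];
const sum = foo(...ab);

// spreading one object into another
const base = {a: 1, b: 2};
const cd = {...base, c: 3, d: 4};
```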



Apr 26
Fynn =?UTF-8?B?U2NocsO2ZGVy?= <fynnos live.com> writes:
On Sunday, 26 April 2020 at 11:13:25 UTC, Sebastiaan Koppe wrote:
On Sunday, 26 April 2020 at 10:20:49 UTC, Walter Bright wrote:
On 4/24/2020 6:31 PM, Manu wrote:
Javascript also uses ... for something similar-ish, so the
web guys should find it familiar too.

I don't remember Javascript doing that, though it's been 20
years since I worked on the JS compiler. Maybe it's a later

It is a recent addition, ES6 I believe.

They use it in destructuring:


const {a, b, ...rest} = {a:1, b:2, c:3, d:4}; // rest == {c:3, d:4}


and in expanding arrays


function foo(a,b) {}
const ab = [1,2];
foo(...ab);

snip

JavaScript uses ... as the spread operator. It expands an
expression in places where multiple arguments are expected. Since
JavaScript does not have tuples, arrays are used instead.

D's equivalent for JS ... operator on function parameters would
be Tuple.expand

Apr 26
Walter Bright <newshound2 digitalmars.com> writes:
On 4/24/2020 1:55 PM, Walter Bright wrote:
Another approach to resolving the original problem (template instantiation
bloat) is for the compiler to recognize templates like AliasSeq as "special"
and
implement them directly.

For example, AliasSeq is defined as:

template AliasSeq(TList...) { alias AliasSeq = TList; }

This pattern is so simple it can be recognized by the compiler. (Having
std.meta.AliasSeq being a special name known to the compiler is not necessary.)

Giving this a try:

https://github.com/dlang/dmd/pull/11057

Apr 25
user1234 <user1234 12.de> writes:
On Saturday, 25 April 2020 at 09:33:19 UTC, Walter Bright wrote:
On 4/24/2020 1:55 PM, Walter Bright wrote:
Another approach to resolving the original problem (template
instantiation bloat) is for the compiler to recognize
templates like AliasSeq as "special" and implement them
directly.

For example, AliasSeq is defined as:

template AliasSeq(TList...) { alias AliasSeq = TList; }

This pattern is so simple it can be recognized by the
compiler. (Having std.meta.AliasSeq being a special name known
to the compiler is not necessary.)

Giving this a try:

https://github.com/dlang/dmd/pull/11057

Nice to see the idea applied. Can you give numbers, i.e from
benchmarks ?

Apr 25
On Saturday, 25 April 2020 at 09:35:45 UTC, user1234 wrote:
On Saturday, 25 April 2020 at 09:33:19 UTC, Walter Bright wrote:
On 4/24/2020 1:55 PM, Walter Bright wrote:
Another approach to resolving the original problem (template
instantiation bloat) is for the compiler to recognize
templates like AliasSeq as "special" and implement them
directly.

For example, AliasSeq is defined as:

template AliasSeq(TList...) { alias AliasSeq = TList; }

This pattern is so simple it can be recognized by the
compiler. (Having std.meta.AliasSeq being a special name
known to the compiler is not necessary.)

Giving this a try:

https://github.com/dlang/dmd/pull/11057

Nice to see the idea applied. Can you give numbers, i.e from
benchmarks ?

this is for the following code:

---
template Add3(alias y)
{
enum Add3 = y + 3;
}

enum x = staticMap!(Add3, AliasSeq!(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47,
48, 49, 50, 51, 52, 53, 54, 55, 56, 57, /*goes towards 4096*/)
// this line was abbreviated.
);
pragma(msg, x.length + x[$-1]);

dmd fresh from walters branch release build with ldmd(ldc) 1.20
time:
0m0.230s
time for a fresh dmd build under the same conditions with that patch reverted:
0m0.265s
time for doing it with our "..." patch:
0m0.030s

these numbers are just a best out of 3 measurement so please only see them
as a rough guide.

Apr 25
Walter Bright <newshound2 digitalmars.com> writes:
On 4/25/2020 3:06 AM, Stefan Koch wrote:
template Add3(alias y)
{
enum Add3 = y + 3;
}

enum x = staticMap!(Add3, AliasSeq!(0, 1, 2, /*goes towards 4096*/)); // abbreviated
pragma(msg, x.length + x[$-1]);

dmd fresh from walters branch release build with ldmd(ldc) 1.20
time:
0m0.230s
time for a fresh dmd build under the same conditions with that patch reverted:
0m0.265s
time for doing it with our "..." patch:
0.030s

these numbers are just a best out of 3 measurement so please only see them
as a rough guide.

Thank you. Though I don't see how the ... thing would speed up AliasSeq, it
would speed up the staticMap.

Apr 25
On Saturday, 25 April 2020 at 09:35:45 UTC, user1234 wrote:
On Saturday, 25 April 2020 at 09:33:19 UTC, Walter Bright wrote:
On 4/24/2020 1:55 PM, Walter Bright wrote:
Another approach to resolving the original problem (template
instantiation bloat) is for the compiler to recognize
templates like AliasSeq as "special" and implement them
directly.

For example, AliasSeq is defined as:

template AliasSeq(TList...) { alias AliasSeq = TList; }

This pattern is so simple it can be recognized by the
compiler. (Having std.meta.AliasSeq being a special name
known to the compiler is not necessary.)

Giving this a try:

https://github.com/dlang/dmd/pull/11057

Nice to see the idea applied. Can you give numbers, i.e from
benchmarks ?

this is for the following code:

---
import std.meta;
alias big_seq = AliasSeq!(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11,
12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27,
28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43,
44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, /*goes
towards 4096*/); // this line was abbreviated.
version(dotdotdot)
{
enum x = (big_seq + 3)...;
}
else
{
template Add3(alias y)
{
enum Add3 = y + 3;
}

enum x = staticMap!(Add3, big_seq);
}

pragma(msg, x.length + x[$-1]);

dmd fresh from walters branch release build with ldmd(ldc) 1.20
time:
0m0.230s
time for a fresh dmd build under the same conditions with that patch reverted:
0m0.270s
time for doing it with our "..." patch:
0m0.030s

these numbers are just a best out of 3 measurement so please only see them
as a rough guide.

Apr 25
Stefan Koch <uplink.coder googlemail.com> writes:
On Saturday, 25 April 2020 at 10:11:08 UTC, Stefan Koch wrote:
this is for the following code:

---
import std.meta;
alias big_seq = AliasSeq!(0, 1, 2, /*goes towards 4096*/); // abbreviated
version(dotdotdot)
{
enum x = (big_seq + 3)...;
}
else
{
template Add3(alias y)
{
enum Add3 = y + 3;
}

enum x = staticMap!(Add3, big_seq);
}

pragma(msg, x.length + x[$-1]);

dmd fresh from walters branch release build with ldmd(ldc) 1.20
time:
0m0.230s
time for a fresh dmd build under the same conditions with that
patch reverted:
0m0.270s
time for doing it with our "..." patch:
0m0.030s

these numbers are just a best out of 3 measurement so please
only see them as a rough guide.

I misspoke.
I had the tests in two different files at first and they were
doing diffrent things.
When ... has to actually create the tuple our time is 0m0.210s

Let me post a version which is slightly more careful.

import std.meta;

import big_alias_seq;

// it's an AliasSeq containing the integers from 0 to 4095 inclusive

version(dotdotdot)
{
mixin("enum x = (big_seq + 3)....length;");
}
else
{
template Add3(alias y)
{
enum Add3 = y + 3;
}
enum x = staticMap!(Add3, big_seq).length;
}

pragma(msg, x);

Walter's patch: 0m0.169s
Without Walter's patch: 0m0.177s
DotDotDot version: 0m0.024s

output for all 3 versions is:
4096LU

Apr 25
Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On Saturday, 25 April 2020 at 10:27:38 UTC, Stefan Koch wrote:
mixin("enum x = (big_seq + 3)....length;");

Aaargh :-\

I'm not anti the ... operator per se but seeing .... in there
(and yes, I understand what's happening!) is not very friendly to

Apr 25
On Saturday, 25 April 2020 at 10:39:24 UTC, Joseph Rushton
Wakeling wrote:
On Saturday, 25 April 2020 at 10:27:38 UTC, Stefan Koch wrote:
mixin("enum x = (big_seq + 3)....length;");

Aaargh :-\

I'm not anti the ... operator per se but seeing .... in
there (and yes, I understand what's happening!) is not very

I just wanted to show off that the parser accepts this properly.

Apr 25
Walter Bright <newshound2 digitalmars.com> writes:
On 4/25/2020 3:40 AM, Stefan Koch wrote:
I just wanted to show off that the parser accepts this properly.

I suspect it would be best for the lexer to reject it.

Apr 25
Manu <turkeyman gmail.com> writes:
On Sat, Apr 25, 2020 at 8:40 PM Joseph Rushton Wakeling via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

On Saturday, 25 April 2020 at 10:27:38 UTC, Stefan Koch wrote:
mixin("enum x = (big_seq + 3)....length;");

Aaargh :-\

I'm not anti the ... operator per se but seeing .... in there
(and yes, I understand what's happening!) is not very friendly to

It did occur to me to require parens in this case... just because :P

Apr 25
Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On Saturday, 25 April 2020 at 10:27:38 UTC, Stefan Koch wrote:
version(dotdotdot)
{
mixin("enum x = (big_seq + 3)....length;");
}

I have to say I don't find this notation intuitive for meaning.
big_seq + 3 would imply that we're adding 3 to big_seq
itself.  But without the surrounding ()... big_seq + 3 is not a
valid expression.

This is a case where the pythonic-style x + 3 for x in big_seq
is, while more verbose, probably also a lot clearer.  It makes
the difference between sequence and element much easier to see.

Compare with say (big_seq_1 + big_seq_2)....  As we discussed
off the message board, assuming the sequences are of the same
length, this would result in the elementwise sum.  In other words

x1 + x2 for (x1, x2) in zip(big_seq_1, big_seq_2)

... but where the latter notation (though verbose) is pretty
clear in what it means, the former is ambiguous because we have
to know that big_seq_1 and big_seq_2 are both sequences.  In
this case I used a clear naming scheme, but if in general I see:

(a + b)...

then I have to know what both a and b are in order to
understand clearly what will happen here.

If on the other hand I see:

a + x for x in b

or

x + b for x in a

or

x1 + x2 for (x1, x2) in zip(a, b)

these are all much clearer in intent.

There are obviously other ways in which the ... operator _is_
intuitive and maps well to some existing uses of similar
notation.  But it would be nice if we could reduce ambiguity of
intent.

Apr 25
Walter Bright <newshound2 digitalmars.com> writes:
On 4/25/2020 3:27 AM, Stefan Koch wrote:
template Add3(alias y)
{
enum Add3 = y + 3;
}

Can also do:

enum x = staticMap!("a + 3", big_seq).length;

or:

enum x = staticMap!(y => y + 3, big_seq).length;

which looks a little nicer.

Apr 25
John Colvin <john.loughran.colvin gmail.com> writes:
On Saturday, 25 April 2020 at 21:33:35 UTC, Walter Bright wrote:
On 4/25/2020 3:27 AM, Stefan Koch wrote:
template Add3(alias y)
{
enum Add3 = y + 3;
}

Can also do:

enum x = staticMap!("a + 3", big_seq).length;

or:

enum x = staticMap!(y => y + 3, big_seq).length;

which looks a little nicer.

You most certainly can't right now.

Apr 26
user1234 <user1234 12.de> writes:
On Saturday, 25 April 2020 at 10:11:08 UTC, Stefan Koch wrote:
On Saturday, 25 April 2020 at 09:35:45 UTC, user1234 wrote:
[...]

this is for the following code:

---
import std.meta;
alias big_seq = AliasSeq!(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11,
12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27,
28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43,
44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, /*goes
towards 4096*/); // this line was abbreviated.
version(dotdotdot)
{
enum x = (big_seq + 3)...;
}
else
{
template Add3(alias y)
{
enum Add3 = y + 3;
}
enum x = staticMap!(Add3, big_seq);
}

pragma(msg, x.length + x[$-1]);

dmd fresh from walters branch release build with ldmd(ldc) 1.20
time:
0m0.230s
time for a fresh dmd build under the same conditions with that
patch reverted:
0m0.270s
time for doing it with our "..." patch:
0m0.030s

these numbers are just a best out of 3 measurement so please
only see them as a rough guide.

ok, thanks. But I'd like to say that Walter's PR doesn't
interfere with the project presented in this topic. Maybe we can
get Walter's improvement quickly merged and then later yours as
well, as it is more complex.

Apr 25
FeepingCreature <feepingcreature gmail.com> writes:
On Saturday, 25 April 2020 at 10:11:08 UTC, Stefan Koch wrote:
[...]

Sidenote: may I recommend the excellent tool multitime
(/usr/bin/time with multiple passes):
https://tratt.net/laurie/src/multitime/ , which gives you mean,
median and standard deviation, automated multiple runs, and even
interleaving of different commands to account for system load.

Apr 27
Stefan Koch <uplink.coder googlemail.com> writes:
On Saturday, 25 April 2020 at 09:33:19 UTC, Walter Bright wrote:
On 4/24/2020 1:55 PM, Walter Bright wrote:
Another approach to resolving the original problem (template
instantiation bloat) is for the compiler to recognize templates
like AliasSeq as "special" and implement them directly. For
example, AliasSeq is defined as:

template AliasSeq(TList...) { alias AliasSeq = TList; }

This pattern is so simple it can be recognized by the compiler.
(Having std.meta.AliasSeq being a special name known to the
compiler is not necessary.)

Giving this a try: https://github.com/dlang/dmd/pull/11057

This is supposed to make using staticMap cheap?
Apr 25
Walter Bright <newshound2 digitalmars.com> writes:
On 4/25/2020 2:49 AM, Stefan Koch wrote:
This is supposed to make using staticMap cheap?

No. It's to make AliasSeq cheap, to remove motivation for making
a special syntactic construct for creating a tuple. Your
benchmark shows indeed it has that effect, though you didn't
test a version that had both your ... and my AliasSeq folded in.
That result would be interesting.

There may be some other common template patterns amenable to
compiler shortcuts.

Apr 25
Stefan Koch <uplink.coder googlemail.com> writes:
On Saturday, 25 April 2020 at 10:27:19 UTC, Walter Bright wrote:
[...]

As I've said before: AliasSeq is not the slow part. As long as
we don't create many templates they can be as expensive as they
want to be. Of course a speedup is always appreciated and I see
no reason not to merge your PR.

Apr 25
Walter Bright <newshound2 digitalmars.com> writes:
On 4/25/2020 3:32 AM, Stefan Koch wrote:
As I've said before: AliasSeq is not the slow part.

I agree. But as your benchmark showed, AliasSeq is worthwhile to
do this optimization for. It doesn't take away motivation for
... at all.

Apr 25
Manu <turkeyman gmail.com> writes:
On Sat, Apr 25, 2020 at 8:30 PM Walter Bright via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

On 4/25/2020 2:49 AM, Stefan Koch wrote:
This is supposed to make using staticMap cheap?

No. It's to make AliasSeq cheap, to remove motivation for making
a special syntactic construct for creating a tuple.
It's still un-fun to type AliasSeq!(), and I've never liked how
it reads or looks... but your patch is certainly welcome.

What we really need, is this:

AliasSeq!(0 .. 10)

We should allow integer .. range in a tuple to describe an iota.
We allow it in foreach statements, it's a sadly missed
opportunity for D.

Apr 25
Walter Bright <newshound2 digitalmars.com> writes:
On 4/25/2020 4:25 AM, Manu wrote:
It's still un-fun to type AliasSeq!(), and I've never liked how
it reads or looks... but your patch is certainly welcome.

It was originally TypeTuple!(), and still is in druntime. (Yes,
the PR recognizes TypeTuple, too, as it recognizes the pattern,
not the identifier.) I don't know why it was changed to
AliasSeq, which just grates on me as not indicating at all what
it is. Probably Tuple would be the best name.

What we really need, is this:

AliasSeq!(0 .. 10)

We should allow integer .. range in a tuple to describe an iota.
We allow it in foreach statements, it's a sadly missed
opportunity for D.

Not a bad idea. (.. is also used in case ranges)

I've also thought that more templates than AliasSeq might be
suitable for compiler short-cutting.

Apr 25
Adam D. Ruppe <destructionator gmail.com> writes:
On Saturday, 25 April 2020 at 21:21:12 UTC, Walter Bright wrote:
I don't know why it was changed to AliasSeq, which just grates
on me as not indicating at all what it is.

People used to get confused over std.typecons.Tuple and the old
TypeTuple. Moreover, TypeTuple can hold more than just types, so
AliasSeq is arguably more descriptive.

Of course, there's still .tupleof on structs which is more of an
AliasSeq than a Tuple so the joy of confusion remains.

Not a bad idea. (.. is also used in case ranges)

eh, case range is a different beast entirely. Though case 1..10:
could perhaps expand to case 1: .. case 9:, I'd be OK having
both. But that shows a key difference: foreach range is
exclusive, case range is inclusive.
I've also thought that more templates than AliasSeq might be
suitable for compiler short-cutting.

yessss though this staticMap... thing combined with CTFE arrays
can replace a bunch of these templates. i'm still letting some
cases percolate through my brain but it is a nice day outside
today so I don't wanna spend 3 hours typing examples yet. yet ;)

Apr 25
Timon Gehr <timon.gehr gmx.ch> writes:
On 25.04.20 23:21, Walter Bright wrote:
On 4/25/2020 4:25 AM, Manu wrote:
It's still un-fun to type AliasSeq!(), and I've never liked how
it reads or looks... but your patch is certainly welcome.

It was originally TypeTuple!(), and still is in druntime. (Yes,
the PR recognizes TypeTuple, too, as it recognizes the pattern,
not the identifier.) I don't know why it was changed to
AliasSeq, which just grates on me as not indicating at all what
it is. Probably Tuple would be the best name.

Tuple does not indicate at all what it is. Tuples usually don't
auto-expand.

May 09
Timon Gehr <timon.gehr gmx.ch> writes:
On 24.04.20 10:03, Walter Bright wrote:
On 4/24/2020 12:10 AM, Manu wrote:
Your idea all falls over anywhere it encounters variadic args,
or potential for overloads though. Without an explicit
expression, the only way forward is to preserve existing D
semantics (or it's a radically breaking change), and that will
severely narrow the applicability of this proposal. Consider
this:

alias Tup = AliasSeq!(0, 2, 3);
void fun(int);
fun(Tup); // scalar argument receives tuple, it can expand, so:
fun(0), fun(2), fun(3)

Write it as:

Tup.fun();

...

That's the same thing.

import std.stdio, std.meta;
alias seq = AliasSeq!(0, 2, 3);
void fun(int a, int b, int c){ writeln(a," ",b," ",c); }
void main(){
    seq.fun(); // 0 2 3
}

Hijacking UFCS to designate some independent distinction is bad
language design. Language features should be orthogonal.
Of course, fun(1, Tup); cannot be rewritten in this way, but
then a rationale will be necessary as to why it must be written
as func(1, Tup) instead of fun(Tup, 1).

Certainly not. The order of parameters of func is what it is and
depending on the situation you will want to expand one or the
other.

May 09
Manu <turkeyman gmail.com> writes:
On Fri, Apr 24, 2020 at 3:51 PM Manu <turkeyman gmail.com> wrote:
On Fri, Apr 24, 2020 at 2:20 PM Walter Bright via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

On 4/22/2020 5:04 AM, Manu wrote:
[...]

Ok, I've had a chance to think about it. It's a scathingly
brilliant idea!

But (there's always a but!) something stuck out at me. Consider
arrays:

void test()
{
auto a = [1, 2, 3];
int[3] b = a[]*a[]; // b[0] = a[0]*a[0]; b[1] = a[1]*a[1]; b[2] = a[2]*a[2];
int[3] c = a[]*2; // c[0] = a[0]*2; c[1] = a[1]*2; c[2] = a[2]*2;
}

These look familiar! D tuples already use array syntax - they
can be indexed and sliced. Instead of the ... syntax, just use
array syntax!

The examples from the DIP:

=====================================
--- DIP
(Tup*10)... --> ( Tup[0]*10, Tup[1]*10, ... , Tup[$-1]*10 )

--- Array syntax
Tup*10

====================================
--- DIP
alias Tup = AliasSeq!(1, 2, 3);
int[] myArr;
assert([ myArr[Tup + 1]... ] == [ myArr[Tup[0] + 1], myArr[Tup[1] + 1],
myArr[Tup[2] + 1] ]);

--- Array
alias Tup = AliasSeq!(1, 2, 3);
int[] myArr;
assert([ myArr[Tup + 1] ] == [ myArr[Tup[0] + 1], myArr[Tup[1] + 1],
myArr[Tup[2] + 1] ]);

===================================
---DIP
alias Values = AliasSeq!(1, 2, 3);
alias Types = AliasSeq!(int, short, float);
pragma(msg, cast(Types)Values...);

---Array
alias Values = AliasSeq!(1, 2, 3);
alias Types = AliasSeq!(int, short, float);
pragma(msg, cast(Types)Values);

=================================
---DIP
alias OnlyTwo = AliasSeq!(10, 20);
pragma(msg, (Values + OnlyTwo)...);

---Array
alias OnlyTwo = AliasSeq!(10, 20);
pragma(msg, Values + OnlyTwo);

The idea is simply if we have:

t op c

where t is a tuple and c is not, the result is:

tuple(t[0] op c, t[1] op c, ..., t[length - 1] op c)

For:

t1 op t2

the result is:

tuple(t1[0] op t2[0], t1[1] op t2[1], ..., t1[length - 1] op
t2[length - 1])

The AST doesn't have to be walked to make this work, just do it as part
of the
usual bottom-up semantic processing.

I thought about this, but this reaches much further than a op b.
When I considered your approach, it appeared to add a lot of edges and
limits on the structure of the expressions, particularly where it interacts

1. no new grammar

Fortunately, the grammar is trivial.

2. no new operator precedence rules
3. turn expressions that are currently errors into doing the obvious thing

This is compelling, but I couldn't think how it can work from end to end.

Why does C++ use ... rather than array syntax? Because C++ doesn't have
arrays!

Another reason I introduce ... is for static fold.
The follow-up to this DIP would make this expression work:

Tup + ...  ->  Tup[0] + Tup[1] + ... + Tup[$-1]

For instance, up-thread it was noted that a static-fold
algorithm may implement a find-type-in-tuple; it would look like
this:

is(MyType == Types) || ...  <- evaluates true if MyType is
present in Types, with no template instantiation junk.

So, the ... is deliberately intended to bring additional value.

Can you show how your suggestion applies to some more complex
cases (not yet noted in the DIP)?

// controlled expansion:
alias Tup = AliasSeq!(0, 1, 2);
alias Tup2 = AliasSeq!(3, 4, 5);
[ Tup, Tup2... ]... -> [ 0, 3, 4, 5 ], [ 1, 3, 4, 5 ], [ 2, 3, 4, 5 ]

// template instantiations
alias TTup = AliasSeq!(int, float, char);
MyTemplate!(Tup, TTup.sizeof...)... -> MyTemplate!(0, 4, 4, 1), MyTemplate!(1, 4, 4, 1), MyTemplate!(2, 4, 4, 1)

// replace staticMap
alias staticMap(alias F, T...) = F!T...;

// more controlled expansion, with template arg lists
AliasSeq!(10, Tup, 20)... -> ( 10, 0, 20, 10, 1, 20, 10, 2, 20 )
AliasSeq!(10, Tup..., 20) -> ( 10, 0, 1, 2, 20 )

// static fold (outside the scope of this DIP, but it's next in line)
Tup + ... -> Tup[0] + Tup[1] + ... + Tup[$-1]

// static find
is(MyType == Types) || ...

That said, with respect to these fold expressions, it would be
ideal if they applied to arrays equally, as I propose for tuples.

I guess this is the key case you need to solve for:

template T(Args...) {}
T!(Tup)     -> T!(0, 1, 2)
T!(Tup)...  -> T!0, T!1, T!2

And the parallel expansion disambiguation is also critical:
T!(Tup, Tup2...)...  -> T!(0, 3, 4, 5), T!(1, 3, 4, 5), T!(2, 3, 4, 5)

If you can solve those, the rest will probably follow.

Apr 23
Walter Bright <newshound2 digitalmars.com> writes:
On 4/23/2020 11:00 PM, Manu wrote:
[...]

Please do not re-quote every ancestor of the thread. Just quote enough to give
context. I do know how to use the threaded view of the n.g. reader if I need
more context.

Apr 24
Walter Bright <newshound2 digitalmars.com> writes:
On 4/23/2020 11:00 PM, Manu wrote:
I guess this is the key case you need to solve for:

template T(Args...) {}
T!(Tup)     -> T!(0, 1, 2)
T!(Tup)...  -> T!0, T!1, T!2

And the parallel expansion disambiguation is also critical:
T!(Tup, Tup2...)...  -> T!(0, 3, 4, 5), T!(1, 3, 4, 5), T!(2, 3, 4, 5)

If you can solve those, the rest will probably follow.

Fair enough. Though there needs to be a rationale as to why those
two particular cases are needed and critical.

Apr 24
Walter Bright <newshound2 digitalmars.com> writes:
On 4/24/2020 1:24 AM, Walter Bright wrote:
On 4/23/2020 11:00 PM, Manu wrote:
I guess this is the key case you need to solve for:

template T(Args...) {}
T!(Tup)     -> T!(0, 1, 2)
T!(Tup)...  -> T!0, T!1, T!2

And the parallel expansion disambiguation is also critical:
T!(Tup, Tup2...)...  -> T!(0, 3, 4, 5), T!(1, 3, 4, 5), T!(2, 3, 4, 5)

If you can solve those, the rest will probably follow.

Fair enough. Though there needs to be a rationale as to why those
two particular cases are needed and critical.

Please keep in mind that the following works today:

void foo(int);

template tuple(T...) { enum tuple = T; }

void test() {
auto t = tuple!(1, 2, 3);
static foreach (i, e; t)
foo(e + i);
}

and generates:

void test() {
foo(1); foo(3); foo(5);
}

Apr 24
On Friday, 24 April 2020 at 08:35:53 UTC, Walter Bright wrote:
On 4/24/2020 1:24 AM, Walter Bright wrote:
On 4/23/2020 11:00 PM, Manu wrote:
[...]

Fair enough. Though there needs to be a rationale as to why
those two particular cases are needed and critical.

Please keep in mind that the following works today:

void foo(int);

template tuple(T...) { enum tuple = T; }

void test() {
auto t = tuple!(1, 2, 3);
static foreach (i, e; t)
foo(e + i);
}

and generates:

void test() {
foo(1); foo(3); foo(5);
}

Because of implementation issues in static foreach that'll take
forever.
Note: those issues are actually impossible to solve in the
general case.

Apr 24
Manu <turkeyman gmail.com> writes:
On Fri, Apr 24, 2020 at 6:40 PM Walter Bright via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

On 4/24/2020 1:24 AM, Walter Bright wrote:
On 4/23/2020 11:00 PM, Manu wrote:
I guess this is the key case you need to solve for:

template T(Args...) {}
T!(Tup)     -> T!(0, 1, 2)
T!(Tup)...  -> T!0, T!1, T!2

And the parallel expansion disambiguation is also critical:
T!(Tup, Tup2...)...  -> T!(0, 3, 4, 5), T!(1, 3, 4, 5), T!(2, 3, 4,

5)
If you can solve those, the rest will probably follow.

Fair enough. Though there needs to be a rationale as to why those
two particular cases are needed and critical.

Please keep in mind that the following works today:

void foo(int);

template tuple(T...) { enum tuple = T; }

void test() {
auto t = tuple!(1, 2, 3);
static foreach (i, e; t)
foo(e + i);
}

and generates:

void test() {
foo(1); foo(3); foo(5);
}

static foreach is not an expression, and it's very hard to involve those
result calls in some conjunction. Expand that code to || them together...
it gets ugly real fast.
I wouldn't have wasted my time writing this DIP and a reference
implementation if static foreach was fine.

Find instances of staticMap in phobos and/or user code, and show how you
can replace them with static foreach, then there's something to talk about.

Apr 24
Walter Bright <newshound2 digitalmars.com> writes:
On 4/24/2020 1:49 AM, Manu wrote:
static foreach is not an expression, and it's very hard to involve those
result
calls in some conjunction. Expand that code to || them together... it gets
ugly
real fast.
I wouldn't have wasted my time writing this DIP and a reference implementation
if static foreach was fine.

This is why I suggested in the "Challenge" thread that these need to be
motivating examples in the proposed DIP, not the ones that are in there, which
are not particularly motivating.

Apr 24
On Friday, 24 April 2020 at 22:42:13 UTC, Walter Bright wrote:
On 4/24/2020 1:49 AM, Manu wrote:
static foreach is not an expression, and it's very hard to
involve those result calls in some conjunction. Expand that
code to || them together... it gets ugly real fast.
I wouldn't have wasted my time writing this DIP and a
reference implementation if static foreach was fine.

This is why I suggested in the "Challenge" thread that these
need to be motivating examples in the proposed DIP, not the
ones that are in there, which are not particularly motivating.

here is one.
For an alias seq containing the integers from 0 to 4096,
construct another alias seq containing every value of the
original one with 3 added to it.
Emit the last element of the newly created tuple plus the last
element of the old tuple (should equal 8194) via pragma(msg).

The compile time for that.
Has to take less than 50 milliseconds.
(as it does with our current proposal)

Apr 24
On Friday, 24 April 2020 at 22:42:13 UTC, Walter Bright wrote:
On 4/24/2020 1:49 AM, Manu wrote:
static foreach is not an expression, and it's very hard to
involve those result calls in some conjunction. Expand that
code to || them together... it gets ugly real fast.
I wouldn't have wasted my time writing this DIP and a
reference implementation if static foreach was fine.

This is why I suggested in the "Challenge" thread that these
need to be motivating examples in the proposed DIP, not the
ones that are in there, which are not particularly motivating.

There is one meta answer to the challenge as well.
The solution you propose to the examples given has to come with
a description of the rule applied, that is as simple as "All
tuples which fall under constraint A create tuples, according to
rule B." Which of course presupposes that this will be part of
the specification.

Apr 24
On Friday, 24 April 2020 at 08:35:53 UTC, Walter Bright wrote:
On 4/24/2020 1:24 AM, Walter Bright wrote:

Please keep in mind that the following works today:

void foo(int);

template tuple(T...) { enum tuple = T; }

void test() {
auto t = tuple!(1, 2, 3);
static foreach (i, e; t)
foo(e + i);
}

and generates:

void test() {
foo(1); foo(3); foo(5);
}

not on 64bit.
on 64bit it's "cannot pass argument cast(ulong)__t_field_0 + 0LU
of type ulong to parameter int".
on 32 bit it generates
(int, int, int) t = tuple(1, 2, 3);
foo(cast(ulong)__t_field_0 + 0LU);
foo(cast(ulong)__t_field_1 + 1LU);
foo(cast(ulong)__t_field_2 + 2LU);

Apr 24
On Friday, 24 April 2020 at 08:52:41 UTC, Stefan Koch wrote:
On Friday, 24 April 2020 at 08:35:53 UTC, Walter Bright wrote:
On 4/24/2020 1:24 AM, Walter Bright wrote:

Please keep in mind that the following works today:

void foo(int);

template tuple(T...) { enum tuple = T; }

void test() {
auto t = tuple!(1, 2, 3);
static foreach (i, e; t)
foo(e + i);
}

and generates:

void test() {
foo(1); foo(3); foo(5);
}

not on 64bit.
on 64bit it's "cannot pass argument cast(ulong)__t_field_0 + 0LU
of type ulong to parameter int".
on 32 bit it generates
(int, int, int) t = tuple(1, 2, 3);
foo(cast(ulong)__t_field_0 + 0LU);
foo(cast(ulong)__t_field_1 + 1LU);
foo(cast(ulong)__t_field_2 + 2LU);

Edit cast(ulong) -> cast(uint)

Apr 24
Manu <turkeyman gmail.com> writes:
On Fri, Apr 24, 2020 at 6:25 PM Walter Bright via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

On 4/23/2020 11:00 PM, Manu wrote:
I guess this is the key case you need to solve for:

template T(Args...) {}
T!(Tup)     -> T!(0, 1, 2)
T!(Tup)...  -> T!0, T!1, T!2

And the parallel expansion disambiguation is also critical:
T!(Tup, Tup2...)...  -> T!(0, 3, 4, 5), T!(1, 3, 4, 5), T!(2, 3, 4, 5)

If you can solve those, the rest will probably follow.

Fair enough. Though there needs to be a rationale as to why those
two particular cases are needed and critical.

That is essentially the whole thing, simmered down to its
essence. Solve those, we have everything.
If you can't solve those, we'll have a sea of non-uniformity, edge cases,
workarounds, and weird shit; classic D.

I haven't had experience with this feature yet; I don't know what's going
to emerge... but the last thing D needs is more weird edge cases.
But this is a clean, uniform, and very efficient design. I would be
disappointed to accept anything less.

I completely understand your motive, it's exactly where I started too. I
couldn't find a way to resolve the ambiguity issues, and I'm not really
interested in proposing a highly compromised design with workarounds to
suit various usage contexts; that's just mental baggage you have to carry
to solve application in all the different usage scenarios.

I'd like you to solve the cases I show above with your proposal. Find a way
to do that, I'm super into your solution. If not, then I'm pretty invested
in my DIP as specced.

Also, can you show all the cases where expansion can and can't
occur under your proposal? These are cases where it must be
inhibited:
1. calling function (because overloads/varargs, risk of hijacking)
2. template instantiations (because variadic arguments)
3. UFCS (because overloads + varargs)
4. array indexing (because overloaded opIndex for N-dimensional arrays)

We also lose the expansion for static fold.

Are there more applications that we lose?

It's possible that it must be inhibited ANYWHERE left of a . because to
the right might be a function, and/or because UFCS, we can not allow it due
to risk of hijacking :/
I don't think auto-expand is safely applicable to enough contexts. I really
think the explicit syntax is a better approach, and I'm interested in
seeing how to apply the expand syntax to arrays for uniformity.

Apr 24
Sebastiaan Koppe <mail skoppe.eu> writes:
On Friday, 24 April 2020 at 09:52:48 UTC, Manu wrote:
I don't think auto-expand is safely applicable to enough
contexts. I really
think the explicit syntax is a better approach, and I'm
interested in
seeing how to apply the expand syntax to arrays for uniformity.

I also think we should make it explicit. Makes it far better.

Otherwise I'll never trust any [] expression I'll see; always
wondering: "does this expand?". It is likely the implementation
is easier as well.

Apr 24
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/24/20 4:24 AM, Walter Bright wrote:
On 4/23/2020 11:00 PM, Manu wrote:
I guess this is the key case you need to solve for:

template T(Args...) {}
T!(Tup)     -> T!(0, 1, 2)
T!(Tup)...  -> T!0, T!1, T!2

And the parallel expansion disambiguation is also critical:
T!(Tup, Tup2...)...  -> T!(0, 3, 4, 5), T!(1, 3, 4, 5), T!(2, 3, 4, 5)

If you can solve those, the rest will probably follow.

Fair enough. Though there needs to be a rationale as to why those
two particular cases are needed and critical.

I think there's a fundamental piece missing from the requirements, you
need to be able to specify where the expansion should stop. Otherwise,
the expressions become very difficult to use effectively.

e.g.:

T!(U!(V!(Tup))...) -> T!(U!(V!0), U!(V!1), U!(V!2))

-Steve

Apr 24
rikki cattermole <rikki cattermole.co.nz> writes:
This can resolve my complaint about syntax, and I can happily
butt out of discussions if this is the direction we go :P

Apr 24
On Friday, 24 April 2020 at 04:15:36 UTC, Walter Bright wrote:
Ok, I've had a chance to think about it. It's a scathingly
brilliant idea!

The AST doesn't have to be walked to make this work, just do it
as part of the usual bottom-up semantic processing.

I honestly don't care about which syntax we choose, but I care
about having the expansion logic be centralized in the compiler.
It almost sounds like you want to make the expansion an inline
part of the regular semantic pass.
If that is done, it's going to be hard to change if we discover
that special handling is needed for particular nodes/node
combinations; .tupleof comes to mind.

I should also note that the DIP does reduce the size of the
trees to sqrt(N) if most of the size came from recursive
instantiations of staticMap.
Therefore a bottom-up work if the subtree is not particularly
expensive when compared to having a weaker more inflexible
version of this feature.

Apr 24
On Friday, 24 April 2020 at 08:25:01 UTC, Stefan Koch wrote:

Therefore a bottom-up work if the subtree

Therefore a bottom-up walk of the sub-tree

Apr 24
On Friday, 24 April 2020 at 08:26:41 UTC, Stefan Koch wrote:
On Friday, 24 April 2020 at 08:25:01 UTC, Stefan Koch wrote:

Therefore a bottom-up work if the subtree

Therefore a bottom-up walk of the sub-tree

Argh. I meant top-down walk.

Apr 24
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/24/20 12:15 AM, Walter Bright wrote:
On 4/22/2020 5:04 AM, Manu wrote:
[...]

Ok, I've had a chance to think about it. It's a scathingly brilliant idea!

But (there's always a but!) something stuck out at me. Consider arrays:

void test()
{
auto a = [1, 2, 3];
int[3] b = a[]*a[]; // b[0] = a[0]*a[0]; b[1] = a[1]*a[1]; b[2] =
a[2]*a[2];
int[3] c = a[]*2; // c[0] = a[0]*2; c[1] = a[1]*2; c[2] = a[2]*2;
}

These look familiar! D tuples already use array syntax - they can be
indexed and sliced. Instead of the ... syntax, just use array syntax!

Hm... but how do you know where to expand the expression and where to stop?

With Manu's proposal:

foo(bar(T)...) -> foo(bar(T[0]), bar(T[1]))

foo(bar(T))... -> foo(bar(T[0])), foo(bar(T[1]))

With array proposal:

foo(bar(T)) -> which one?

The examples from the DIP:

=====================================
--- DIP
(Tup*10)...  -->  ( Tup[0]*10, Tup[1]*10, ... , Tup[$-1]*10 )

--- Array syntax
Tup*10

====================================
--- DIP
alias Tup = AliasSeq!(1, 2, 3);
int[] myArr;
assert([ myArr[Tup + 1]... ] == [ myArr[Tup[0] + 1], myArr[Tup[1] + 1], myArr[Tup[2] + 1] ]);

--- Array
alias Tup = AliasSeq!(1, 2, 3);
int[] myArr;
assert([ myArr[Tup + 1] ] == [ myArr[Tup[0] + 1], myArr[Tup[1] + 1], myArr[Tup[2] + 1] ]);

Why is it not

assert( [myArr[Tup + 1] ] == [ myArr[Tup[0] + 1]], myArr[Tup[1] + 1]]

I think you absolutely need the tag to say which expressions are
affected.

An important thing to note is that we can't store expressions
from statement to statement. This doesn't work (assuming array
proposal):

alias arrayParams = myArr[Tup + 1]; // error, can't read blah blah blah at compile time.
auto newArr = [arrayParams]; // would be the equivalent of [ myArr[Tup + 1]... ]

Another big problem is that tuples already bind to parameter
lists, so this would be a breaking change or else be crippled
for many uses. e.g.:

foo(int x, int y = 5);
alias seq = AliasSeq!(1, 2);
foo(seq) -> foo(1, 5), foo(2, 5) or foo(1, 2)?

-Steve

Apr 24
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/24/20 8:47 AM, Steven Schveighoffer wrote:
assert( [myArr[Tup + 1] ] == [ myArr[Tup[0] + 1]], myArr[Tup[1] + 1]]

Missed a bracket:

assert( [myArr[Tup + 1] ] == [ myArr[Tup[0] + 1]], [ myArr[Tup[1] + 1]]

-Steve

Apr 24
Piotr Mitana <piotr.mitana gmail.com> writes:
On Wednesday, 22 April 2020 at 12:04:30 UTC, Manu wrote:
https://github.com/dlang/DIPs/pull/188

My two cents; the idea is nice, however I would consider a bit
different syntax.

1. We already have std.typecons.Tuple.expand, which does a
similar thing. If it is doable with a library solution, it could
be Expand!(TypeTuple).

2. If not, the ... operator is a good choice here, but I would
suggest making it prefix, and not postfix. First, it would
nicely differentiate it from the variadic operator.
Second, it would be straightforward and consistent for those using JavaScript as well, since ... is the spread operator there - call(...[a, b, c]) is equivalent to call(a, b, c). Such a spread operator in D could work for TypeTuples, value tuples, and static arrays. Simple, concise, and similar to what one of the most widely used languages does.

Apr 24
Mafi <mafi example.org> writes:
On Wednesday, 22 April 2020 at 12:04:30 UTC, Manu wrote:
We have a compile time problem, and this is basically the cure. Intuitively, people imagine CTFE is expensive (and it kinda is), but really, the reason our compile times are bad is template instantiation. This DIP single-handedly fixes compile-time issues in programs I've written by reducing template instantiations by near-100%, in particular, the expensive ones: recursive instantiations, usually implementing some form of static map.

https://github.com/dlang/DIPs/pull/188

This is an RFC on a draft, but I'd like to submit it with a reference implementation soon. Stefan Koch has helped me with a reference implementation, which has so far gone surprisingly smoothly, and has shown 50x improvement in compile times in some artificial tests. I expect much greater improvements in situations where recursive template expansion reaches a practical threshold due to quadratic resource consumption used by recursive expansions (junk template instantiations, and explosive symbol name lengths). This should also drastically reduce compiler memory consumption in meta-programming heavy applications.

In addition to that, it's simple, terse, and reduces program logic indirection via 'utility' template definitions, which I find improves readability substantially.

We should have done this a long time ago.

- Manu

There are more corner cases to consider: AliasSeq-like members and tuple slicing. What happens when a struct (or whatever) has an AliasSeq-like member, most prominently std.typecons.Tuple.expand?
Tuple!(short, int, long) t = tuple(short(1), int(2), long(3));

writeln(AliasSeq!("x", t.expand)); // => x 1 2 3
writeln(AliasSeq!("x", t.expand...)); // => x 1 2 3 (unambiguously, although the semantics are already unclear)
writeln(AliasSeq!("x", t.expand)...); // => ? does the ...-operator inspect the member to know it is AliasSeq-like?
writeln(AliasSeq!("x", ((() => t)()).expand)...); // ?
writeln(AliasSeq!("x", ({ static if (t.expand == 1) { return t; } else { return tuple(4, 5, 6); } }()).expand)...); // it can't possibly expand here because the inner tuple already depends on the ...-expansion of t.expand == 1

The only sane way seems to be to treat .expand and any other member as an opaque, to-be-determined entity which is skipped by ...-expansion. Otherwise you get an inconsistency between cases 3 and 4/5. You have to explicitly opt in by doing:

alias e = t.expand;
writeln(AliasSeq!("x", e)...); // => x 1 x 2 x 3

The other corner case is AliasSeq-slicing.

writeln(AliasSeq!("x", AliasSeq!(1, 2, 3))...); // no named sequence, no expansion => x 1 2 3
writeln(AliasSeq!("x", e)...); // named, therefore must be expansion => x 1 x 2 x 3
writeln(AliasSeq!("x", e[1..$])...); // ?

Is the last case of e[1..$] just an expression that happens to evaluate to an AliasSeq (like example 1 here), or more like a named expression like example 2 here? These cases also need to be specified in the DIP.

Apr 24
Walter Bright <newshound2 digitalmars.com> writes:
On 4/22/2020 5:04 AM, Manu wrote:
[...]

The examples in the DIP are, frankly, too trivial to make a case for the ... feature.

So here's the challenge to everyone: a small (5?) collection of non-trivial motivating examples that show how it is done in D today, and how it is done with ..., and how much better the ... is.

Apr 24
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/24/20 5:26 PM, Walter Bright wrote:
On 4/22/2020 5:04 AM, Manu wrote:
[...]

The examples in the DIP are, frankly, too trivial to make a case for the ... feature.

So here's the challenge to everyone: a small (5?) collection of non-trivial motivating examples that show how it is done in D today, and how it is done with ..., and how much better the ... is.

I have a couple: std.meta.NoDuplicates and std.meta.Filter.

template NewFilter(alias pred, T...)
{
    import std.meta : AliasSeq;
    static if (T.length == 1)
    {
        static if (pred!T)
            alias NewFilter = AliasSeq!(T);
        else
            alias NewFilter = AliasSeq!();
    }
    else
    {
        alias NewFilter = NewFilter!(pred, T)...;
    }
}

Original implementation here:
https://github.com/dlang/phobos/blob/9844c34196c6f34743bfb4878d78cba804a57bf9/std/meta.d#L878-L898

Note that I've cut down the template instantiations from N * lg(N) instantiations of Filter to N instantiations of NewFilter, and only one level of recursion. Not only that, but... If you have:

Filter!(Pred, SomeLongTuple);
NewFilter!(Pred, SomeLongTuple);

then

Filter!(Pred, SomeLongTuple[1 .. $]);

is going to produce a large number of different instantiations (because
of the divide-and-conquer recursion). Mostly only the leaves will be reused.

But

NewFilter!(Pred, SomeLongTuple[1 .. $]);

will generate ONE more instantiation, because all the rest will have been done the first time.

Note that I have tested this in the proposed branch, with a few workarounds for syntax (Manu is working on this). And it passes the same unittests as Filter.

Now, for NoDuplicates, I have a personal interest in seeing this one be fixed:

template staticIota(size_t N)
{
    import std.meta : AliasSeq;
    string buildIota()
    {
        import std.range : iota;
        import std.format : format;
        return format("alias staticIota = AliasSeq!(%(%d,%));", iota(N));
    }
    mixin(buildIota());
}

template NewNoDuplicates(T...)
{
    import std.meta : AliasSeq;
    template getAlias(size_t idx)
    {
        alias list = T[0 .. idx];
        alias item = T[idx];
        import std.algorithm : canFind;
        static if ([__traits(isSame, list, item)...].canFind(true))
            alias getAlias = AliasSeq!();
        else
            alias getAlias = AliasSeq!(T[idx]);
    }
    alias idxList = staticIota!(T.length)[1 .. $];
    alias NewNoDuplicates = AliasSeq!(T[0], getAlias!(idxList)...);
}

Original implementation here:
https://github.com/dlang/phobos/blob/9844c34196c6f34743bfb4878d78cba804a57bf9/std/meta.d#L405-L505

(yes, that's 100 lines, but includes docs and unittests)

Comparison of instantiations is laughable. I'm instantiating one
template per element, plus the original NewNoDuplicates, vs. whatever
happens in std.meta (there are several helper templates).

I believe the complexity of the original filter is N * N * lg(N),
whereas mine is N * N. If we had first-class type manipulation in CTFE, then
we could get it down to N lg(N) by sorting the list. Also, I can
probably avoid canFind and whatever it imports if I wanted to just
search with a simple loop function for any true values in a local CTFE
function.

I also tested this in the new branch, and it works, but again there are
some bugs in the implementation Manu is working on, so the compilable
version doesn't look as pretty.

Note, I think staticIota is going to be super-important if we get this
... (or whatever) change in, because you can do a lot of things now with
a tuple of indexes. The above is crude, and probably can be improved,
but note how we don't need all the vagaries of iota, because we can just
use ... expressions to make whatever sequence we need out of 0 .. N.

That's why I think something like AliasSeq!(0 .. N) handled by the
compiler would be tremendously useful.
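For reference, something close to AliasSeq!(0 .. N) is already reachable in today's D via std.meta.aliasSeqOf; this is a hedged sketch of the index-tuple idiom using only current features, not anything from the DIP:

```d
// Sketch (today's D, no DIP features): build a compile-time index
// sequence from a CTFE range and use it to index another sequence.
import std.meta : AliasSeq, aliasSeqOf;
import std.range : iota;

alias Seq = AliasSeq!("a", "b", "c");
alias indices = aliasSeqOf!(iota(Seq.length)); // the sequence 0, 1, 2

static assert(indices.length == 3);
static assert(Seq[indices[2]] == "c");

void main() {}
```

The difference is that aliasSeqOf goes through CTFE and a template instantiation, where a built-in AliasSeq!(0 .. N) would not.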

-Steve

Apr 25
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/24/20 5:26 PM, Walter Bright wrote:
On 4/22/2020 5:04 AM, Manu wrote:
[...]

The examples in the DIP are, frankly, too trivial to make a case for the
... feature.

So here's the challenge to everyone: a small (5?) collection of
non-trivial motivating examples that show how it is done in D today, and
how it is done with ..., and how much better the ... is.

also note, we can obsolete std.meta.anySatisfy or std.meta.allSatisfy,
as we can use CTFE and ... expressions to do the same thing:

enum orig = anySatisfy!(someTemplate, someList);

enum changed = [someTemplate!(someList)...].any(true);

I propose a function any which takes a range and returns true if any
element matches the provided one, and another, all(val), which does the
same if all elements match the value. This gives us maximum flexibility.

Not only that, but that pattern can be used without temporary little
someTemplate things, which are inevitably needed for anySatisfy.

If you look in my NoDuplicates example, I have used this pattern like:

static if([__traits(isSame, tuple, elem)...].any(true))

No templates needed.
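For comparison, here is a hedged sketch of what the same membership test costs in today's D: the little helper template (sameAsInt below, invented for this example) is exactly the extra machinery the ...-expression would eliminate.

```d
// Today's equivalent of [__traits(isSame, list, item)...]: a helper
// template plus staticMap builds the bool array that the proposed
// ...-expression would produce inline.
import std.algorithm.searching : canFind;
import std.meta : AliasSeq, staticMap;

alias list = AliasSeq!(int, long, int);

// the "temporary little template" the ...-expression makes unnecessary
enum sameAsInt(T) = __traits(isSame, T, int);

static assert([staticMap!(sameAsInt, list)].canFind(true));

void main() {}
```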

-Steve

Apr 25
Paul Backus <snarwin gmail.com> writes:
On Saturday, 25 April 2020 at 15:17:35 UTC, Steven Schveighoffer
wrote:
I propose a function any which takes a range and returns true
if any element matches the provided one. And another one
all(val) which does the same if all are the value. This gives
us maximum flexibility.

Looks like std.algorithm.any and std.algorithm.all can already do
this.
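For reference, a small runnable sketch of how std.algorithm covers these cases today: any and all with no predicate test truthiness, and canFind matches a specific value.

```d
// std.algorithm.searching today: truthiness tests and value search.
import std.algorithm.searching : any, all, canFind;

void main()
{
    assert([false, true, false].any);           // at least one element is true
    assert(![false, true, false].all);          // not all elements are true
    assert([false, true, false].canFind(true)); // match a provided value
}
```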

Apr 25
Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/25/20 2:52 PM, Paul Backus wrote:
On Saturday, 25 April 2020 at 15:17:35 UTC, Steven Schveighoffer wrote:
I propose a function any which takes a range and returns true if any
element matches the provided one. And another one all(val) which does
the same if all are the value. This gives us maximum flexibility.

Looks like std.algorithm.any and std.algorithm.all can already do this.

Nice! The only thing I would say is I don't want to have a giant nest of
subcalls for something like this, which can be done with one foreach
loop. I'll have to look at the implementation.

-Steve

Apr 26
On Sunday, 26 April 2020 at 16:07:30 UTC, Steven Schveighoffer
wrote:
On 4/25/20 2:52 PM, Paul Backus wrote:
On Saturday, 25 April 2020 at 15:17:35 UTC, Steven
Schveighoffer wrote:
I propose a function any which takes a range and returns true
if any element matches the provided one. And another one
all(val) which does the same if all are the value. This gives
us maximum flexibility.

Looks like std.algorithm.any and std.algorithm.all can already
do this.

Nice! The only thing I would say is I don't want to have a
giant nest of subcalls for something like this, which can be
done with one foreach loop. I'll have to look at the
implementation.

-Steve

If you need this implementation to be extended, please drop me a
line.

Apr 26
Adam D. Ruppe <destructionator gmail.com> writes:
I think I just had a use for this.

I have a const Rectangle and wanted to shift the whole thing
down and right one unit.

Wrote:

Rectangle(r.l + 1, r.t + 1, r.b + 1, r.r + 1);

Would have been kinda cool if I could have done

Rectangle((r.tupleof + 1)...)

I don't think that can be done today since functions cannot
return tuples and templates cannot use runtime values.

Though well it perhaps could be done today with something like

auto tupleMap(alias what, T...)(T t) {
    struct Ret { T pieces; }
    foreach (ref arg; t)
        arg = what(arg);
    return Ret(t);
}

Rectangle(tupleMap!(a => a+1)(r.tupleof).tupleof);

I think that'd work and it isn't recursive template instantiation
but it is a bit wordy.

I think this new static map thing could be useful for runtime
values as well as template things.
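For what it's worth, the tupleMap sketch above does seem to compile today. A hedged, self-contained usage example (the Rectangle type and its l/t/b/r field order are assumptions made for this illustration; IFTI strips head-const from by-value arguments, which is why the const fields map fine):

```d
// Hypothetical usage of the tupleMap sketch: map +1 over a const
// struct's fields and rebuild it, with no recursive templates.
struct Rectangle { int l, t, b, r; } // stand-in type for this example

auto tupleMap(alias what, T...)(T t)
{
    struct Ret { T pieces; }
    foreach (ref arg; t)   // unrolled over the value sequence
        arg = what(arg);
    return Ret(t);
}

void main()
{
    const r = Rectangle(0, 0, 10, 10);
    auto shifted = Rectangle(tupleMap!(a => a + 1)(r.tupleof).tupleof);
    assert(shifted == Rectangle(1, 1, 11, 11));
}
```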

May 07
Adam D. Ruppe <destructionator gmail.com> writes:
Another potential use for this would be writing type translation
functions.

I've written something along these lines many times:

ParameterTypeTuple!T fargs;
foreach (idx, a; fargs) {
    if (idx == args.length)
        break;
    cast(Unqual!(typeof(a))) fargs[idx] =
        args[idx].get!(typeof(a));
}

This converts arguments of a dynamic type to the static parameter
types of a given function in preparation to call it.

That ugly cast on the lhs there is to deal with const.

void foo(const int a) {}

That function there needs the cast to assign to the const param.

Well, with the static map, we MIGHT be able to just do

foo(fromDynamic(fargs, args.pop)...)

or something like that which expands in place so the mutable
thing is implicitly cast to const without the explicit cast
intermediate.

I haven't tried that, btw; I just think it might work given the
concept. That would be impossible with staticMap as-is because of
the runtime variable in there.

May 08
Manu <turkeyman gmail.com> writes:
On Sat, May 9, 2020 at 1:50 AM Adam D. Ruppe via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

Another potential use for this would be writing type translation
functions.

I've written something along these lines many times:

ParameterTypeTuple!T fargs;
foreach (idx, a; fargs) {
    if (idx == args.length)
        break;
    cast(Unqual!(typeof(a))) fargs[idx] =
        args[idx].get!(typeof(a));
}

This converts arguments of a dynamic type to the static parameter
types of a given function in preparation to call it.

That ugly cast on the lhs there is to deal with const.

void foo(const int a) {}

That function there needs the cast to assign to the const param.

Well, with the static map, we MIGHT be able to just do

foo(fromDynamic(fargs, args.pop)...)

or something like that which expands in place so the mutable
thing is implicitly cast to const without the explicit cast
intermediate.

I haven't tried that, btw; I just think it might work given the
concept. That would be impossible with staticMap as-is because of
the runtime variable in there.

Yes! I use this pattern _all the time_ in C++, and it's definitely a
motivating use case for ... as I see it.
This is an intended and important use case.

May 08
Q. Schroll <qs.il.paperinik gmail.com> writes:
There are some corner cases I'd like to have an answer to:

void f(int i, long l) { }

import std.typecons : Tuple;

alias Tup = Tuple!(int, long);
Tup t = Tup.init;

f(t...); //?

Would that //? line work, i.e. use alias expand this of
std.typecons.Tuple? (Currently, without the dots, it doesn't
work!) If so, say I have nested tuple types like these:

alias AB = Tuple!(A, B);
alias CD = Tuple!(C, D);
alias T = Tuple!(AB, CD);

void g(AB ab, CD cd) { }
void h(A a, B b, C c, D d) { }

T t = T.init;
g(t...); //1  t... should be (t.expand[0], t.expand[1])
h(t...); //2
h((t...)...); //3  (t...)... should be
(t.expand[0].expand[0], t.expand[0].expand[1],
t.expand[1].expand[0], t.expand[1].expand[1])

I'd expect //1 to work. Does ... expand multiple times if it
needs to like in //2 or would I need to do the thing in //3?
Would //3 even work?

What does t.expand... mean? Without much thinking, I'd expect
(t.expand[0], t.expand[1]), but if ... really walks into the
expression, seeing . as a binary operator, it should be
expanding t and expand in parallel, for t using its alias
this. That way, it boils down to (t.expand[0] . expand[0],
t.expand[1] . expand[1]).

Should we be allowed to write (t.expand)... and t.(expand...) to
clarify what we mean?

(I want this DIP to succeed, so I'll walk into every corner and
expose the cases for them to be addressed.)

May 08
Adam D. Ruppe <destructionator gmail.com> writes:
On Friday, 8 May 2020 at 20:53:39 UTC, Q. Schroll wrote:
alias Tup = Tuple!(int, long);

This is just a struct, the ... shouldn't do anything to it
(probably should be a syntax error).

The ... thing is all about compiler tuples which are
unfortunately named the same but totally separate to library
tuples.

May 08
Manu <turkeyman gmail.com> writes:
On Sat, May 9, 2020 at 7:20 AM Adam D. Ruppe via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

On Friday, 8 May 2020 at 20:53:39 UTC, Q. Schroll wrote:
alias Tup = Tuple!(int, long);

This is just a struct, the ... shouldn't do anything to it
(probably should be a syntax error).

The ... thing is all about compiler tuples which are
unfortunately named the same but totally separate to library
tuples.

^^^
std.typecons.Tuple is a struct. It's not a tuple.

May 08
Timon Gehr <timon.gehr gmx.ch> writes:
On 09.05.20 02:21, Manu wrote:
On Sat, May 9, 2020 at 7:20 AM Adam D. Ruppe via Digitalmars-d
<digitalmars-d puremagic.com <mailto:digitalmars-d puremagic.com>> wrote:

On Friday, 8 May 2020 at 20:53:39 UTC, Q. Schroll wrote:
>     alias Tup = Tuple!(int, long);

This is just a struct, the ... shouldn't do anything to it
(probably should be a syntax error).

The ... thing is all about compiler tuples which are
unfortunately named the same but totally separate to library
tuples.

^^^
std.typecons.Tuple is a struct. It's not a tuple.

It implements a tuple. It's just not a weird built-in compiler "tuple".

May 09
Q. Schroll <qs.il.paperinik gmail.com> writes:
On Friday, 8 May 2020 at 21:18:26 UTC, Adam D. Ruppe wrote:
On Friday, 8 May 2020 at 20:53:39 UTC, Q. Schroll wrote:
alias Tup = Tuple!(int, long);

This is just a struct, the ... shouldn't do anything to it
(probably should be a syntax error).

It surely is a struct, but it's a struct with an alias-this to a
tuple of the kind you're discussing. That's what my question was about.

The ... thing is all about compiler tuples which are
unfortunately named the same but totally separate to library
tuples.

I surely can distinguish a language construct from an aggregate
type defined in the library. It's not about the library type! I
wasn't fooled by the nomenclature, you were.

You all (maybe with the exception of Timon Gehr) got me all wrong. My
question was about alias-this. I used std.typecons.Tuple as a well-known example of
an aggregate type with an alias-this to a tuple (tuple in the
sense you're discussing all the time). See the reference now?

Consider:

struct S(Ts...)
{
    Ts variables;
    alias variables this;
}

void f(int, long);

auto obj = S!(int, long)(1, 2L);

Now, per the DIP, will f( (obj+5)... ) rewrite to
f(obj.variables[0]+5, obj.variables[1]+5) by going through S's
alias variables this or would it just ignore alias this
altogether?

Is it better formulated, now that I'm not using an example quite
many people here are familiar with?

May 11
Steven Schveighoffer <schveiguy gmail.com> writes:
On 5/11/20 3:46 PM, Q. Schroll wrote:
On Friday, 8 May 2020 at 21:18:26 UTC, Adam D. Ruppe wrote:
On Friday, 8 May 2020 at 20:53:39 UTC, Q. Schroll wrote:
alias Tup = Tuple!(int, long);

This is just a struct, the ... shouldn't do anything to it (probably
should be a syntax error).

It surely is a struct, but it's a struct with an alias-this to a tuple
of the kind you're discussing. That's what my question was about.

The ... thing is all about compiler tuples which are unfortunately
named the same but totally separate to library tuples.

I surely can distinguish a language construct from an aggregate type
defined in the library. It's not about the library type! I wasn't fooled
by the nomenclature, you were.

You all (maybe with the exception of Timon Gehr) got me all wrong. My
question was about alias-this. I used std.typecons.Tuple as a well-known example of an
aggregate type with an alias-this to a tuple (tuple in the sense you're
discussing all the time). See the reference now?

Consider:

struct S(Ts...)
{
    Ts variables;
    alias variables this;
}

void f(int, long);

auto obj = S!(int, long)(1, 2L);

Now, per the DIP, will f( (obj+5)... ) rewrite to
f(obj.variables[0]+5, obj.variables[1]+5) by going through S's alias
variables this or would it just ignore alias this altogether?

Is it better formulated, now that I'm not using an example quite many
people here are familiar with?

I think the answer is no.

I think you will have to be explicit in the reference to the variables:

f((obj.variables + 5)...)

This should also work:

f((anyOldStruct.tupleof + 5)...)

and this:

f((someStdTypeconsTuple.expand + 5)...)

There are many expressions that *result* in compiler tuples that would
NOT be expanded. e.g. template instantiations and the like. I think only
symbols and __traits expressions should be expanded, and not via alias this.
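As a baseline, the explicit-.tupleof half of this already works in today's D; it is only the per-element arithmetic that needs the DIP. A minimal runnable sketch:

```d
// Today's D: .tupleof already expands into an argument list when passed
// to a function. The element-wise mapping (obj.variables + 5)... is the
// part that requires the DIP.
struct S { int a; long b; }

long f(int i, long l) { return i + l; }

void main()
{
    auto obj = S(1, 2);
    assert(f(obj.tupleof) == 3); // expands to f(obj.a, obj.b)
}
```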

This is something Manu and I discussed on slack, and I think he was
going to update the DIP to reflect on how this would work (he may have
`