
digitalmars.D - The Strange Loop conference

reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
I'm back from the Strange Loop conference. It's been an interesting 
experience. The audience was very diverse, with interest in everything 
functional (you'd only need to say "Haskell" or "monad" to get positive 
aahs), Web/cloud/tablet stuff, concurrent and distributed systems. 
Mentioning C++ non-mockingly was all but a faux pas; languages like 
Scala, Haskell, Clojure, and Erlang got attention. The quality of talks 
has been very variable, and unfortunately the 
over-confident-but-clueless-speaker stereotype was well represented.

I gave a talk there 
(https://thestrangeloop.com/sessions/generic-programming-galore-using-d) 
which has enjoyed moderate audience and success. I uploaded the slides 
at http://erdani.com/d/generic-programming-galore.pdf and the video may 
be available soon.

There was a very strong interest in D's CTFE abilities, which we should 
consider a strategic direction going forward. Also finalizing D's 
concurrency language and library infrastructure is essential. These two 
features are key differentiators. Also, capitalizing on D's functional 
features (which are e.g. better than Scala's but less known) would add 
good value.


Thanks,

Andrei
Sep 21 2011
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:

 which has enjoyed moderate audience and success. I uploaded the slides 
 at http://erdani.com/d/generic-programming-galore.pdf and the video may 
 be available soon.

In future talks I suggest showing some downsides too, like explaining how much memory the D regex engine uses, or how hard text-based mixins like the bitfields are to write, read, modify and debug, etc. In my opinion, if you only show the upsides of something, probably even average programmers will become suspicious.

Bye,
bearophile
Sep 21 2011
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/21/11 12:59 PM, bearophile wrote:
 Andrei Alexandrescu:

 which has enjoyed moderate audience and success. I uploaded the
 slides at http://erdani.com/d/generic-programming-galore.pdf and
 the video may be available soon.

In future talks I suggest showing some downsides too, like explaining how much memory the D regex engine uses, or how hard text-based mixins like the bitfields are to write, read, modify and debug, etc.

I did mention the downsides in the talk: memory consumption during CTFE and the compiler's inability to explain why a template didn't match. Andrei
Sep 21 2011
parent Don <nospam nospam.com> writes:
On 21.09.2011 22:32, Sean Kelly wrote:
 On Sep 21, 2011, at 12:59 PM, Andrei Alexandrescu wrote:

 On 9/21/11 12:59 PM, bearophile wrote:
 Andrei Alexandrescu:

 which has enjoyed moderate audience and success. I uploaded the
 slides at http://erdani.com/d/generic-programming-galore.pdf and
 the video may be available soon.

In future talks I suggest showing some downsides too, like explaining how much memory the D regex engine uses, or how hard text-based mixins like the bitfields are to write, read, modify and debug, etc.

I did mention the downsides in the talk: memory consumption during CTFE and the compiler's inability to explain why a template didn't match.

If DMD cleaned up after itself, the memory consumption issue would be far less significant though. Perhaps DMD could be fixed up to the point where GC could be enabled? I recall the code being there, but that there were issues with turning it on.

The CTFE memory consumption and slowness is not a GC issue. It's copy-on-write:

http://d.puremagic.com/issues/show_bug.cgi?id=6498

I know how to fix this.
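For readers who haven't hit the problem, here is a minimal sketch (hypothetical helper name, current Phobos assumed) of the kind of CTFE string building that stresses a copy-on-write interpreter: every append conceptually duplicates the whole intermediate string, so memory use grows quadratically with the result length.

```d
import std.conv : to;

// Builds a code fragment at compile time. Under a copy-on-write CTFE
// interpreter, each ~= may copy the entire intermediate string.
string buildCases(int n)
{
    string s;
    foreach (i; 0 .. n)
        s ~= "case " ~ i.to!string ~ ": ";
    return s;
}

// Forcing compile-time evaluation:
enum table = buildCases(3);
static assert(table == "case 0: case 1: case 2: ");
```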
Sep 21 2011
prev sibling parent Sean Kelly <sean invisibleduck.org> writes:
On Sep 21, 2011, at 12:59 PM, Andrei Alexandrescu wrote:

 On 9/21/11 12:59 PM, bearophile wrote:
 Andrei Alexandrescu:
 which has enjoyed moderate audience and success. I uploaded the
 slides at http://erdani.com/d/generic-programming-galore.pdf and
 the video may be available soon.

In future talks I suggest showing some downsides too, like explaining how much memory the D regex engine uses, or how hard text-based mixins like the bitfields are to write, read, modify and debug, etc.

I did mention the downsides in the talk: memory consumption during CTFE and the compiler's inability to explain why a template didn't match.

If DMD cleaned up after itself, the memory consumption issue would be far less significant though. Perhaps DMD could be fixed up to the point where GC could be enabled? I recall the code being there, but that there were issues with turning it on.
Sep 21 2011
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 09/21/2011 06:55 PM, Andrei Alexandrescu wrote:
 I'm back from the Strange Loop conference. It's been an interesting
 experience. The audience was very diverse, with interest in everything
 functional (you'd only need to say "Haskell" or "monad" to get positive
 aahs), Web/cloud/tablet stuff, concurrent and distributed systems.
 Mentioning C++ non-mockingly was all but a faux pas; languages like
 Scala, Haskell, Clojure, and Erlang got attention. The quality of talks
 has been very variable, and unfortunately the
 over-confident-but-clueless-speaker stereotype was well represented.

 I gave a talk there
 (https://thestrangeloop.com/sessions/generic-programming-galore-using-d)
 which has enjoyed moderate audience and success. I uploaded the slides
 at http://erdani.com/d/generic-programming-galore.pdf and the video may
 be available soon.

I am looking forward to it!
 There was a very strong interest in D's CTFE abilities, which we should
 consider a strategic direction going forward.

One milestone would be to make DMD GC enabled. CTFE string manipulation sucks up too much memory.
 Also finalizing D's
 concurrency language and library infrastructure is essential. These two
 features are key differentiators. Also, capitalizing on D's functional
 features (which are e.g. better than Scala's but less known) would add
 good value.

Where D still loses when compared to Scala is functional code syntax:

val newcol = collection filter {x=>x<5} map {x=>2*x}

or maybe

val newcol = for(x<-collection if(x<5)) yield 2*x

vs

auto newrange = filter!((x){return x<5;})(map!((x){return 2*x;})(range));

I believe that is part of the reason why it is less known. If I write functional-style code in D, I unfortunately usually get feedback that it is "extremely hard to read". Furthermore, Scala has built-in tuples and e.g. delimited continuations, which enable some more functional programming idioms. std.algorithm/std.range are also considerably (and, as I understand, deliberately) underpowered for FP when compared to e.g. Haskell's standard set of libraries, and writing a good range from scratch is usually a serious undertaking.

Another big advantage that Scala has is that it supports pattern matching. It is usually about the first feature functional programmers care about.

What do you think is the biggest advantage that D's functional programming has in comparison to Scala's?
Sep 21 2011
next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 09/21/2011 09:44 PM, Andrej Mitrovic wrote:
 What is the purpose of this ternary operator on slide 16?
 static if (is(typeof(1 ? T[0].init : T[1].init) U))

This does the bulk of the work to get the combined type of T[0] and T[1]. It basically just asks the compiler what the combined type should be. It works because the result type of the ternary operator is the combined type of its second and third operands.
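As a self-contained sketch, the idiom from slide 16 can be wrapped up like this (CommonTypeOf is a hypothetical name used here for illustration; Phobos's real equivalent is std.traits.CommonType):

```d
// Asks the compiler for the combined type of two types by probing the
// result type of the ternary operator; the is(... U) form binds the
// answer to U, and fails (taking the else branch) if none exists.
template CommonTypeOf(T...) if (T.length == 2)
{
    static if (is(typeof(1 ? T[0].init : T[1].init) U))
        alias CommonTypeOf = U;
    else
        static assert(0, "no common type");
}

static assert(is(CommonTypeOf!(int, long) == long));
static assert(is(CommonTypeOf!(byte, double) == double));
```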
Sep 21 2011
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/21/11 2:29 PM, Timon Gehr wrote:
 On 09/21/2011 06:55 PM, Andrei Alexandrescu wrote:
 I'm back from the Strange Loop conference. It's been an interesting
 experience. The audience was very diverse, with interest in everything
 functional (you'd only need to say "Haskell" or "monad" to get positive
 aahs), Web/cloud/tablet stuff, concurrent and distributed systems.
 Mentioning C++ non-mockingly was all but a faux pas; languages like
 Scala, Haskell, Clojure, and Erlang got attention. The quality of talks
 has been very variable, and unfortunately the
 over-confident-but-clueless-speaker stereotype was well represented.

 I gave a talk there
 (https://thestrangeloop.com/sessions/generic-programming-galore-using-d)
 which has enjoyed moderate audience and success. I uploaded the slides
 at http://erdani.com/d/generic-programming-galore.pdf and the video may
 be available soon.

I am looking forward to it!
 There was a very strong interest in D's CTFE abilities, which we should
 consider a strategic direction going forward.

One milestone would be to make DMD GC enabled. CTFE string manipulation sucks up too much memory.
 Also finalizing D's
 concurrency language and library infrastructure is essential. These two
 features are key differentiators. Also, capitalizing on D's functional
 features (which are e.g. better than Scala's but less known) would add
 good value.

Where D still loses when compared to Scala is functional code syntax: val newcol = collection filter {x=>x<5} map {x=>2*x}

Yoda this wrote.
 or maybe

 val newcol = for(x<-collection if(x<5)) yield 2*x

Too baroque.
 vs

 auto newrange = filter!((x){return x<5;})(map!((x){return 2*x;})(range));

auto newrange = filter!"a<5"(map!"2*a"(range));

At first some people get the heebie-jeebies when seeing string-based lambdas and the implicit naming convention for unary and binary functions. More importantly, there's the disadvantage that you can't access local functions inside a string lambda. We should probably add a specialized lambda syntax.
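Both points can be seen in a short, runnable sketch (assuming current std.algorithm): the string lambda works, but only a real function literal can see a local variable.

```d
import std.algorithm : filter, map, equal;

void main()
{
    auto range = [1, 2, 3, 4, 5, 6];

    // String lambdas: "a" names the single argument by convention.
    auto doubledSmall = filter!"a < 5"(map!"2 * a"(range));
    assert(equal(doubledSmall, [2, 4]));

    int limit = 5;
    // filter!"a < limit"(...) would not compile: the string cannot see
    // the local variable. A function literal can:
    auto withLocal = filter!((x) { return x < limit; })(map!"2 * a"(range));
    assert(equal(withLocal, [2, 4]));
}
```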
 I believe that is part of the reason why it is less known. If I write
 functional style code in D I unfortunately usually get feedback that it
 is "extremely hard to read". Furthermore, Scala has built-in tuples, and
 eg delimited continuations, which enable some more functional
 programming idioms. std.algorithm/std.range are also considerably (and
 as I understand, deliberately) underpowered for FP when compared to eg
 Haskell's standard set of libraries, and writing a good range from
 scratch is usually a serious undertaking.

I wonder what Scala/Haskell idioms we should borrow for D.
 Another big advantage that Scala has is that it supports pattern
 matching. It is usually about the first feature functional programmers
 care about.

 What do you think is the biggest advantage that Ds functional
 programming has in comparison to Scala's?

1. True immutability

2. True purity

3. Pass by alias

For a language that has so firmly branded itself as functional, Scala has a surprisingly poor handling of immutability and referential transparency aka purity. My understanding is that Scala's immutability merely consists of final values and a web of conventions. (Please correct me if I'm wrong.)

Andrei
Sep 21 2011
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 09/21/2011 10:20 PM, Andrei Alexandrescu wrote:
 On 9/21/11 2:29 PM, Timon Gehr wrote:
 On 09/21/2011 06:55 PM, Andrei Alexandrescu wrote:
 I'm back from the Strange Loop conference. It's been an interesting
 experience. The audience was very diverse, with interest in everything
 functional (you'd only need to say "Haskell" or "monad" to get positive
 aahs), Web/cloud/tablet stuff, concurrent and distributed systems.
 Mentioning C++ non-mockingly was all but a faux pas; languages like
 Scala, Haskell, Clojure, and Erlang got attention. The quality of talks
 has been very variable, and unfortunately the
 over-confident-but-clueless-speaker stereotype was well represented.

 I gave a talk there
 (https://thestrangeloop.com/sessions/generic-programming-galore-using-d)
 which has enjoyed moderate audience and success. I uploaded the slides
 at http://erdani.com/d/generic-programming-galore.pdf and the video may
 be available soon.

I am looking forward to it!
 There was a very strong interest in D's CTFE abilities, which we should
 consider a strategic direction going forward.

One milestone would be to make DMD GC enabled. CTFE string manipulation sucks up too much memory.
 Also finalizing D's
 concurrency language and library infrastructure is essential. These two
 features are key differentiators. Also, capitalizing on D's functional
 features (which are e.g. better than Scala's but less known) would add
 good value.

Where D still loses when compared to Scala is functional code syntax: val newcol = collection filter {x=>x<5} map {x=>2*x}

Yoda this wrote.

Actually it is the order in which it will (conceptually at least) be executed =). D can do similar things on arrays with UFCS.
 or maybe

 val newcol = for(x<-collection if(x<5)) yield 2*x

Too baroque.
 vs

 auto newrange = filter!((x){return x<5;})(map!((x){return 2*x;})(range));

auto newrange = filter!"a<5"(map!"2*a"(range)); At first some people get the heebiejeebies when seeing string-based lambda and the implicit naming convention for unary and binary functions.

Yes, I have seen it happen multiple times. There are even people who consider strings-as-code an evil concept entirely :o). How are CTFE and string mixins best introduced to people who think they might be too lazy to bother about learning more about the language?
 More importantly, there's the disadvantage you can't access
 local functions inside a string lambda.

I often want to do that. We could replace the 5 and 2 constants by stack variables, and then the point would be made.
 We should probably add a
 specialized lambda syntax.

It would probably be a simple addition, as it is restricted mostly to the parser. Like this? Or is there a better lambda syntax around?

int u,v;
auto newrange = map!(x=>u*x)(filter!(x=>v<x)(range));
auto newrange = range.filter!(x=>v<x).map!(x=>u*x);
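For the record, syntax essentially like this was later adopted by D; on a current compiler the sketch compiles as written (filter/map from today's std.algorithm, chained with UFCS, with the lambdas capturing the locals u and v):

```d
import std.algorithm : filter, map, equal;

void main()
{
    int u = 2, v = 3;
    auto range = [1, 2, 3, 4, 5];

    // Arrow lambdas referring to enclosing locals:
    auto newrange = range.filter!(x => v < x).map!(x => u * x);
    assert(equal(newrange, [8, 10]));
}
```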
 I believe that is part of the reason why it is less known. If I write
 functional style code in D I unfortunately usually get feedback that it
 is "extremely hard to read". Furthermore, Scala has built-in tuples, and
 eg delimited continuations, which enable some more functional
 programming idioms. std.algorithm/std.range are also considerably (and
 as I understand, deliberately) underpowered for FP when compared to eg
 Haskell's standard set of libraries, and writing a good range from
 scratch is usually a serious undertaking.

I wonder what Scala/Haskell idioms we should borrow for D.

In my opinion these are worth looking into:

Haskell: This might not happen, but having a tiny little function with a literate name for most common tasks is very convenient. I think other Haskell idioms not already in D are hard to translate because the type system and execution model of the two languages are so different. I have managed to use D for Haskell-lookalike toy examples nonetheless. (It leads to extremely obfuscated code because of many nested lambdas and a bug in the lazy storage class that needs to be worked around.)

Both: (Concise expression-based lambdas.) Pattern matching, if it can be incorporated in a good way. (Dynamic) lazy evaluation. Scala has lazy val. It would play very nicely with D's purity, but maybe suboptimally with immutable (for immutable structures, the lazy fields would have to be computed eagerly).

Generators? (Those are implicit in Haskell.) Something like this example would automatically create a compile-time lazy range to compute a power set efficiently:

auto powerset(T)(T[] x) {
    foreach(i;0..1<<x.length) {
        yield (size_t i){
            while(i){
                yield x[bsf(i)];
                i &= i-1;
            }
        }(i);
    }
}

Equivalent code that spells out the nested structs with all the front/popFront/empty etc. contains considerable boilerplate.

I am not sure how powerful it could be, but it should be possible to automatically let the following example propagate random access iff fun is pure, based on the observation that it does not have state:

auto map(alias fun, R)(R m){
    foreach(x; m) yield unaryFun!fun(x);
}

The range concept is very important in D code; some syntactic sugar like this would help a lot.
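To make the boilerplate comparison concrete, here is a hand-written input range equivalent to just the inner yield loop above, i.e. the elements of x selected by a bitmask (Subset/subset are hypothetical names for this sketch):

```d
import core.bitop : bsf;

// Iterates the elements of x whose positions are set in mask.
struct Subset(T)
{
    T[] x;
    size_t mask;

    @property bool empty() const { return mask == 0; }
    @property T front() const { return x[bsf(mask)]; }
    void popFront() { mask &= mask - 1; } // clear the lowest set bit
}

auto subset(T)(T[] x, size_t mask) { return Subset!T(x, mask); }

void main()
{
    import std.algorithm : equal;
    assert(equal(subset([10, 20, 30], 0b101), [10, 30]));
}
```

And this is only one of the two nested generators; the outer one would need a second struct of its own.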
 Another big advantage that Scala has is that it supports pattern
 matching. It is usually about the first feature functional programmers
 care about.

 What do you think is the biggest advantage that Ds functional
 programming has in comparison to Scala's?

1. True immutability 2. True purity

Compiler-enforced immutability and purity.
 3. Pass by alias

It is extremely nice how templates are automatically instantiated in the correct scope. The fact that delegates are constrained to one context pointer and that static function literals don't decay to delegates tends to be annoying though. (also, that there is no inference if a context pointer is needed)
 For a language that has so firmly branded itself as functional, Scala
 has a surprisingly poor handling of immutability and referential
 transparency aka purity. My understanding is that Scala's immutability
 merely consists of final values and a web of conventions. (Please
 correct me if I'm wrong.)

I think you are right. But in Scala, much of the language semantics are actually implemented in the library. The immutable collections could be considered truly immutable by some.
Sep 21 2011
parent reply bearophile <bearophileHUGS lycos.com> writes:
Peter Alexander:

 auto newrange = filter!"a<5"(map!"2*a"(range));
 
 At least I think that works??

It works:

import std.stdio, std.range, std.algorithm;
void main() {
    auto r = iota(20);
    auto n = filter!q{ a < 5 }(map!q{ 2 * a }(r));
    writeln(n);
}
 The problem it's simply intractable to do lazy computation in D while
 maintaining the 'correct' static typing.

Both Perl 6 and Scala have a "lazy stream" that you use like Haskell's lazy lists. It is possible to implement something similar in D; recently someone has shown some code.
 The problem with the D situation is that if I have a function:
 
 void foo(int[])

 then it can't be called with:
 
 foo(map!"2*a"([1, 2, 3]));
 
 I'm forced to either:
 (a) use eager computation by wrapping it in array(...)
 (b) change foo to be a template

This is why I have asked for functions like amap/afilter in Phobos, because in many situations in D you need an array instead of a lazy range.
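In today's Phobos the eager variant can at least be spelled as a short composition with std.array.array (an approximation, not the dedicated amap being asked for):

```d
import std.algorithm : map;
import std.array : array;

void main()
{
    // Force the lazy map result into a real int[] immediately.
    int[] doubled = map!"2 * a"([1, 2, 3]).array;
    assert(doubled == [2, 4, 6]);
}
```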
 Of course, if every function that accepts a range becomes a template
 then you have the practical problem of managing the number of template
 instantiations. Code bloat is still a real problem with heavy template
 usage.

Haskell has used typeclasses to avoid this :-)
 Finally, map in Haskell just makes sense. You have a list of a's, a
 function of 'a to b' and you get a list of b's out the other end. It's
 simple, it's exactly what you want, and it just works. You can't say the
 same in D.

I agree. That has two good consequences:
- It makes sense for the Haskell compiler too, so it is able to perform very complex optimizations that no D compiler today performs.
- Map being so easy to write, use and understand, the programmer is able to write far more complex things. If writing a map requires tens of lines of complex D code, normal programmers usually don't want to write things much more complex (type-wise too) than a map.
 * I say simple cases because Haskell compiler can do deforestation
 optimisations, which are essentially intractable in D due to its
 imperative roots.

I think future D compilers will find some ways to further optimize D code that uses purity/immutability a lot.

-----------------------
Andrei Alexandrescu:
 Where D still loses when compared to Scala is functional code syntax:

 val newcol = collection filter {x=>x<5} map {x=>2*x}

Yoda this wrote.
 or maybe

 val newcol = for(x<-collection if(x<5)) yield 2*x

Too baroque.
 vs

 auto newrange = filter!((x){return x<5;})(map!((x){return 2*x;})(range));

auto newrange = filter!"a<5"(map!"2*a"(range));

Python lazy iterators syntax wins over both languages:
 collection = xrange(20)
 [2 * x for x in collection if x < 5]  # eager



 newcol = (2 * x for x in collection if x < 5)  # lazy
 list(newcol)



 We should probably add a specialized lambda syntax.

Do you mean something like C# lambdas?
 I wonder what Scala/Haskell idioms we should borrow for D.

Haskell has several nice ideas, but I think it's not easy to copy them without changing D a lot. Still, being open toward this idea is an improvement for D designers :-)

--------------------------------
Andrei Alexandrescu:
 You want Haskell capability with D performance. You can't have that.

We live in a world where JavaScript is sometimes only 3-4 times slower than D code, and where LuaJIT compiles dynamically typed, floating-point-heavy Lua programs and runs them in less total time than just running a binary produced by DMD. So be careful when you say something is impossible :-)

In the works there is a dialect of Haskell meant for faster programs: http://www.haskell.org/haskellwiki/DDC
 D's map is superior to Haskell's. There is no contest.

Some aspects of D map are superior, and some aspects of Haskell map are superior to D ones.
 (In addition, Hakell's map forces ONE higher-order function
 representation, whereas D works with a function alias, function, or
 delegate.)

I think this is an advantage of Haskell. I think Haskell doesn't need those things; it's more uniform.
 D also allows things that are essentially impossible in Haskell, such as
quicksort.

The point of this discussion is to look for ways to improve D. Bashing Haskell is off topic and bad.

-----------------------
Timon Gehr:
 Pattern matching, if it can be incorporated in a good way.

There are ideas to improve switch a bit: http://d.puremagic.com/issues/show_bug.cgi?id=596
 Generators?

This is an idea: http://d.puremagic.com/issues/show_bug.cgi?id=5660
 The range concept is very important in D code, some syntactic sugar like
 this would help a lot.

Right. Bye, bearophile
Sep 21 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/21/11 5:22 PM, bearophile wrote:
 This is why I have asked for functions like amap/afilter in Phobos,
 because in many situations in D you need an array instead of a lazy
 range.

I don't think that was being asked.
 Of course, if every function that accepts a range becomes a
 template then you have the practical problem of managing the number
 of template instantiations. Code bloat is still a real problem with
 heavy template usage.

Haskell has used typeclasses to avoid this :-)

At the conference I discussed typeclasses at length with a Haskell connoisseur and he had to agree that D's template constraints are a suitable replacement, and also that template constraints can express requirements that type classes have difficulties with. This is because D's template constraints can involve several types simultaneously and can do arbitrary computations. (That's also what makes them less structured than typeclasses.)
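A small sketch of a constraint that relates two types at once, which is the kind of requirement typeclasses have difficulty expressing (copyInto is a hypothetical function for illustration, not Phobos):

```d
import std.range;

// The constraint ties R and T together: R must be an input range whose
// elements are assignable into a T[] slot. Non-matching calls simply
// fail to match, instead of erroring inside the body.
void copyInto(R, T)(R src, T[] dst)
if (isInputRange!R && is(typeof(dst[0] = src.front)))
{
    size_t i;
    foreach (e; src)
        dst[i++] = e;
}

void main()
{
    auto buf = new long[3];
    copyInto([1, 2, 3], buf); // int elements widen into long[]: matches
    assert(buf == [1L, 2L, 3L]);
    // copyInto(["a", "b"], buf); // string -> long: fails the constraint
}
```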
 Finally, map in Haskell just makes sense. You have a list of a's,
 a function of 'a to b' and you get a list of b's out the other end.
 It's simple, it's exactly what you want, and it just works. You
 can't say the same in D.

I agree. That has two good consequences: - It makes sense for the Haskell compiler too, so it is able to perform very complex optimizations that no D compiler today performs. - Being map so easy to write, to use and to understand, the programmer is able to write far more complex things. If writing a map requires a tens lines of complex D code, normal programmers usually don't want to write things much more complex (type-wise too) than a map.

I'm not sure how that assumption works. Implementing a regex engine is quite an endeavor, but that doesn't stop people from using it casually, and for higher-level things. On the Haskell compiler being "... able to perform very complex optimizations that no D compiler today performs" I call bullshit. The two languages have very different approaches to computation and choose right off the bat very different corners of the PL design space. Consequently, the challenges met by the two languages are very different and the approaches to overcome those challenges are also very different. Haskell starts with a computational model far removed from the reality of the computing fabric it works on. Therefore, it gains some in certain areas, but it loses some when it comes to efficiency. Naturally, it needs more sophisticated optimizations to overcome that handicap. That doesn't mean much in comparing Haskell to D or the quality of a Haskell compiler with that of a D compiler.
 Python lazy iterators syntax wins over both languages:

 collection = xrange(20) [2 * x for x in collection if x<  5]  #
 eager



 newcol = (2 * x for x in collection if x<  5)  # lazy
 list(newcol)




I guess I must act impressed.
 We should probably add a specialized lambda syntax.

Do you mean something like C# lambdas?

Per Walter's enumeration of syntax in various languages.
 I wonder what Scala/Haskell idioms we should borrow for D.

Haskell has several nice ideas, but I think it's not easy to copy them without changing D a lot. Still, being open toward this idea is an improvement for D designers :-)

Feel free to spare the patronizing part, it won't be missed.
 You want Haskell capability with D performance. You can't have
 that.

We live in a world where JavaScript is sometimes only 3-4 times slower than D code, and where LuaJIT compiles dynamically typed Lua floating-point-heavy programs and runs them in a total amount of time lower than just running a binary produced by DMD. So be careful when you say something is impossible :-) In the works there is a dialect of Haskell meant for faster programs: http://www.haskell.org/haskellwiki/DDC

That wasn't my point.
 D's map is superior to Haskell's. There is no contest.

Some aspects of D map are superior, and some aspects of Haskell map are superior to D ones.
 (In addition, Hakell's map forces ONE higher-order function
 representation, whereas D works with a function alias, function,
 or delegate.)

I think this is an advantage of Haskell. I think Haskell doesn't need those things, it's more uniform.

It's also slower. Per the other half of your other posts, efficiency is a huge concern to you.
 D also allows things that are essentially impossible in Haskell,
 such as quicksort.

The point of this discussion is to look for ways to improve D. bashing Haskell is off topic and bad.

Nobody's bashing Haskell. But we can't work from the assumption that everything in Haskell is non-critically good and has no downsides. Andrei
Sep 21 2011
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:

 Feel free to spare the patronizing part, it won't be missed.

I am sorry. I have full respect for Walter's and yours work. Bye, bearophile
Sep 21 2011
prev sibling parent bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:

I missed answering a small part:

 On 9/21/11 5:22 PM, bearophile wrote:
 This is why I have asked for functions like amap/afilter in Phobos,
 because in many situations in D you need an array instead of a lazy
 range.

I don't think that was being asked.

I don't understand what you are telling me here. This is what I was referring to:

http://d.puremagic.com/issues/show_bug.cgi?id=5756
http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D.learn&article_id=29516

Bye,
bearophile
Sep 22 2011
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-09-21 22:20, Andrei Alexandrescu wrote:
 On 9/21/11 2:29 PM, Timon Gehr wrote:
 On 09/21/2011 06:55 PM, Andrei Alexandrescu wrote:
 I'm back from the Strange Loop conference. It's been an interesting
 experience. The audience was very diverse, with interest in everything
 functional (you'd only need to say "Haskell" or "monad" to get positive
 aahs), Web/cloud/tablet stuff, concurrent and distributed systems.
 Mentioning C++ non-mockingly was all but a faux pas; languages like
 Scala, Haskell, Clojure, and Erlang got attention. The quality of talks
 has been very variable, and unfortunately the
 over-confident-but-clueless-speaker stereotype was well represented.

 I gave a talk there
 (https://thestrangeloop.com/sessions/generic-programming-galore-using-d)
 which has enjoyed moderate audience and success. I uploaded the slides
 at http://erdani.com/d/generic-programming-galore.pdf and the video may
 be available soon.

I am looking forward to it!
 There was a very strong interest in D's CTFE abilities, which we should
 consider a strategic direction going forward.

One milestone would be to make DMD GC enabled. CTFE string manipulation sucks up too much memory.
 Also finalizing D's
 concurrency language and library infrastructure is essential. These two
 features are key differentiators. Also, capitalizing on D's functional
 features (which are e.g. better than Scala's but less known) would add
 good value.

Where D still loses when compared to Scala is functional code syntax: val newcol = collection filter {x=>x<5} map {x=>2*x}

Yoda this wrote.
 or maybe

 val newcol = for(x<-collection if(x<5)) yield 2*x

Too baroque.
 vs

 auto newrange = filter!((x){return x<5;})(map!((x){return 2*x;})(range));

auto newrange = filter!"a<5"(map!"2*a"(range)); At first some people get the heebiejeebies when seeing string-based lambda and the implicit naming convention for unary and binary functions. More importantly, there's the disadvantage you can't access local functions inside a string lambda. We should probably add a specialized lambda syntax.

I like this syntax:

auto newRange = range.map(a => 2 * a).filter(a => a < 5);

--
/Jacob Carlborg
Sep 22 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-09-23 18:05, Simen Kjaeraas wrote:
 On Thu, 22 Sep 2011 11:22:10 +0200, Jacob Carlborg <doob me.com> wrote:

 auto newrange = filter!"a<5"(map!"2*a"(range));

 At first some people get the heebiejeebies when seeing string-based
 lambda and the implicit naming convention for unary and binary
 functions. More importantly, there's the disadvantage you can't access
 local functions inside a string lambda. We should probably add a
 specialized lambda syntax.

I like this syntax: auto newRange = range.map(a => 2 * a).filter(a => a < 5);

I dislike this syntax, in that it seems to pass the delegates as normal function parameters, rather than template parameters. New version:

auto newRange = range.map!(a => 2 * a).filter!(a => a < 5);

The important thing here was the lambda syntax, not whether it's passed as a template parameter or a regular parameter.

--
/Jacob Carlborg
Sep 23 2011
prev sibling next sibling parent reply Peter Alexander <peter.alexander.au gmail.com> writes:
On 21/09/11 8:29 PM, Timon Gehr wrote:
 Where D still loses when compared to Scala is functional code syntax:

 val newcol = collection filter {x=>x<5} map {x=>2*x}

 or maybe

 val newcol = for(x<-collection if(x<5)) yield 2*x

 vs

 auto newrange = filter!((x){return x<5;})(map!((x){return 2*x;})(range));

Of course, you could write:

auto newrange = filter!"a<5"(map!"2*a"(range));

At least I think that works??
 I believe that is part of the reason why it is less known. If I write
 functional style code in D I unfortunately usually get feedback that it
 is "extremely hard to read". Furthermore, Scala has built-in tuples, and
 eg delimited continuations, which enable some more functional
 programming idioms. std.algorithm/std.range are also considerably (and
 as I understand, deliberately) underpowered for FP when compared to eg
 Haskell's standard set of libraries, and writing a good range from
 scratch is usually a serious undertaking.

The problem is that it's simply intractable to do lazy computation in D while maintaining the 'correct' static typing.

For example, in Haskell, map (correctly) has the signature:

map :: (a -> b) -> [a] -> [b]

but in D, std.map has the signature (expressed in some Haskell/D pseudocode):

map :: (a -> b) -> [a] -> Map!((a -> b), [a])

To get the same kind of signature as Haskell, you'd need to use eager computation in D, but that's horrendously inefficient in most cases. Alternatively, you can use interfaces, but then you lose type information in a lot of situations.

The problem with the D situation is that if I have a function:

void foo(int[])

then it can't be called with:

foo(map!"2*a"([1, 2, 3]));

I'm forced to either:
(a) use eager computation by wrapping it in array(...)
(b) change foo to be a template

What if foo is a virtual member function? Those can't be templates.

What if foo recursively maps to itself? e.g.

auto foo(Range)(Range r)
{
    if (r.length == 1)
        return r.front;
    r.popFront();
    return map!"2*a"(r);
}

Someone in D.learn tried to write quickSort in a similar way, but it obviously doesn't work because of the infinite number of template instantiations. To simulate lazy computation in D, you require a type transformation, which doesn't work with recursion. Your only choice is to abstract away to some sort of IRange!T interface to remove the type divergence, but then you lose performance, and in some cases, type information.

Of course, if every function that accepts a range becomes a template then you have the practical problem of managing the number of template instantiations. Code bloat is still a real problem with heavy template usage.

Finally, map in Haskell just makes sense. You have a list of a's, a function from 'a to b', and you get a list of b's out the other end. It's simple, it's exactly what you want, and it just works. You can't say the same in D.

Of course, what D's lazy ranges do give you is performance in the simple cases*. There are no indirect function calls, which means things can be inlined, and you save yourself a few potential cache misses. We just have to accept that this performance comes at the expense of simplicity and expressibility.

* I say simple cases because the Haskell compiler can do deforestation optimisations, which are essentially intractable in D due to its imperative roots.
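The two workarounds (a) and (b) can be sketched concretely; a minimal sketch, assuming std.algorithm's map and std.array's array, with hypothetical foo bodies:

```d
import std.algorithm : map;
import std.array : array;

// (a) eager: force the lazy Map range into an int[] before the call
void fooEager(int[] xs) { /* uses a plain array */ }

// (b) template: accept any range type, one instantiation per range type
void fooTemplate(Range)(Range r) { /* stays lazy */ }

void main()
{
    fooEager(array(map!"2*a"([1, 2, 3]))); // materializes the result eagerly
    fooTemplate(map!"2*a"([1, 2, 3]));     // no allocation, but a new instantiation
}
```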
Sep 21 2011
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/21/11 3:34 PM, Peter Alexander wrote:
 The problem is that it's simply intractable to do lazy computation in D
 while maintaining the 'correct' static typing.

 For example, in Haskell, map (correctly) has the signature:

 map :: (a -> b) -> [a] -> [b]

 but in D, std.map has the signature (expressed in some Haskell/D
 pseudocode)

 map :: (a -> b) -> [a] -> Map!((a -> b), [a])

I think it really is

map :: (a -> b) -> Range1!a -> Map!((a -> b), Range1!a)

i.e. the type of the input range is not fixed to be a built-in list. That makes D's map more general than Haskell's, but also more difficult to implement. For example, Haskell's implementation does not need to worry about O(1) access to the nth element of the result.
 To get the same kind of signature as Haskell, you'd need to use eager
 computation in D, but that's horrendously inefficient in most cases.
 Alternatively, you can use interfaces, but then you lose type
 information in a lot of situations.

 The problem with the D situation is that if I have a function:

 void foo(int[])

 then it can't be called with:

 foo(map!"2*a"([1, 2, 3]));

 I'm forced to either:
 (a) use eager computation by wrapping it in array(...)
 (b) change foo to be a template

This is exactly as it should be. For whatever reason, foo wants an array. For a good reason, map returns a lazy sequence that depends upon the type of its input. You necessarily need to force computation into an array because foo needs an array and because D arrays are eager. This is tantamount to saying that D is not as good as Haskell at lazy computation. Fine - lazy computation is Haskell's turf.
 What if foo is a virtual member function? Those can't be templates.

Then it would need to take a dynamic Range as parameter, parameterized with the element type.
 What if foo recursively maps to itself? e.g.

 auto foo(Range)(Range r)
 {
 if (r.length == 1)
 return r.front;
 r.popFront();
 return map!"2*a"(r);
 }

Hm, this is not recursive (and should work).
 Someone in D.learn tried to write quickSort in a similar way, but it
 obviously doesn't work because of the infinite number of template
 instantiations. To simulate lazy computation in D, you require a type
 transformation, which doesn't work with recursion. Your only choice is
 to abstract away to some sort of IRange!T interface to remove the type
 divergence, but then you lose performance, and in some cases, type
 information.

But Haskell has lost the performance argument right off the bat. You are mixing tradeoffs here. There are pretty immutable rules of what can and cannot be done, many of which are consequences of a handful of simple facts. You want Haskell capability with D performance. You can't have that.
 Of course, if every function that accepts a range becomes a template
 then you have the practical problem of managing the number of template
 instantiations. Code bloat is still a real problem with heavy template
 usage.

 Finally, map in Haskell just makes sense. You have a list of a's, a
 function of 'a to b' and you get a list of b's out the other end. It's
 simple, it's exactly what you want, and it just works. You can't say the
 same in D.

D's map is superior to Haskell's. There is no contest. Again, Haskell's map forces only ONE range abstraction to fit everybody. D's map allows anyone to choose their own range abstraction, and is clever enough to define another map abstraction based on it that offers maximum capability and maximum efficiency. (In addition, Haskell's map forces ONE higher-order function representation, whereas D works with a function alias, function, or delegate.)
 Of course, what D's lazy ranges do give you is performance in the simple
 cases*. There's no indirect function calls, which means things can be
 inlined, and you save yourself a few potential cache misses. We just
 have to accept that this performance is at the expense of simplicity and
 expressibility.

You could have indirect calls in D.
 * I say simple cases because Haskell compiler can do deforestation
 optimisations, which are essentially intractable in D due to its
 imperative roots.

D also allows things that are essentially impossible in Haskell, such as quicksort. Andrei
Sep 21 2011
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 09/21/2011 11:11 PM, Andrei Alexandrescu wrote:
 On 9/21/11 3:34 PM, Peter Alexander wrote:
 What if foo is a virtual member function? Those can't be templates.

Then it would need to take a dynamic Range as parameter, parameterized with the element type.

Would it be a good idea to allow this?

to!Interface(structInstance);

This would work if the struct implicitly fulfills the interface, returning a class instance that is constructed on the fly and implements the interface by forwarding to the struct instance.

foo(to!IInputRange(map!(...)(...)));
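A sketch of the same idea with what Phobos already offers, assuming std.range's InputRange interface and its inputRangeObject adapter:

```d
import std.algorithm : map;
import std.range : InputRange, inputRangeObject;

// a virtual (non-template) function can take the dynamic range interface
void foo(InputRange!int r)
{
    foreach (x; r) { /* use x */ }
}

void main()
{
    // wraps the struct range in a class object implementing InputRange!int
    foo(inputRangeObject(map!"2*a"([1, 2, 3])));
}
```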
Sep 21 2011
parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 22.09.2011 1:53, Timon Gehr wrote:
 On 09/21/2011 11:11 PM, Andrei Alexandrescu wrote:
 On 9/21/11 3:34 PM, Peter Alexander wrote:
 What if foo is a virtual member function? Those can't be templates.

Then it would need to take a dynamic Range as parameter, parameterized with the element type.

Would it be a good idea to allow this? to!Interface(structInstance); This would work if Struct implicitly fulfills the interface and return a class instance that is constructed on the fly and implements the interface by forwarding to the struct instance. foo(to!IInputRange(map!(...)(...)));

There is an adaptTo already in Phobos that does something like that. So all that's left is screwing some bolts into 'to'.

-- Dmitry Olshansky
Sep 22 2011
prev sibling parent Peter Alexander <peter.alexander.au gmail.com> writes:
On 21/09/11 10:11 PM, Andrei Alexandrescu wrote:
 On 9/21/11 3:34 PM, Peter Alexander wrote:
 The problem is that it's simply intractable to do lazy computation in D
 while maintaining the 'correct' static typing.

 For example, in Haskell, map (correctly) has the signature:

 map :: (a -> b) -> [a] -> [b]

 but in D, std.map has the signature (expressed in some Haskell/D
 pseudocode)

 map :: (a -> b) -> [a] -> Map!((a -> b), [a])

I think it really is

map :: (a -> b) -> Range1!a -> Map!((a -> b), Range1!a)

i.e. the type of the input range is not fixed to be a built-in list. That makes D's map more general than Haskell's, but also more difficult to implement. For example, Haskell's implementation does not need to worry about O(1) access to the nth element of the result.

True, I oversimplified it.
 To get the same kind of signature as Haskell, you'd need to use eager
 computation in D, but that's horrendously inefficient in most cases.
 Alternatively, you can use interfaces, but then you lose type
 information in a lot of situations.

 The problem with the D situation is that if I have a function:

 void foo(int[])

 then it can't be called with:

 foo(map!"2*a"([1, 2, 3]));

 I'm forced to either:
 (a) use eager computation by wrapping it in array(...)
 (b) change foo to be a template

This is exactly as it should be. For whatever reason, foo wants an array. For a good reason, map returns a lazy sequence that depends upon the type of its input. You necessarily need to force computation into an array because foo needs an array and because D arrays are eager. This is tantamount to saying that D is not as good as Haskell at lazy computation. Fine - lazy computation is Haskell's turf.

Well yes, but that's the whole problem: std.algorithm and std.range act as if lazy, functional programming were easy and expressive in D, but it's not, except in small self-contained examples.
 What if foo is a virtual member function? Those can't be templates.

Then it would need to take a dynamic Range as parameter, parameterized with the element type.
 What if foo recursively maps to itself? e.g.

 auto foo(Range)(Range r)
 {
 if (r.length == 1)
 return r.front;
 r.popFront();
 return map!"2*a"(r);
 }

Hm, this is not recursive (and should work).

Apologies, the last line should read: return foo(map!"2*a"(r));
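With that correction, each recursive call instantiates foo over a fresh MapResult type, hence the infinite instantiation. A sketch of the interface workaround described above (assuming std.range's InputRange and inputRangeObject), which keeps a single instantiation at the cost of indirect calls:

```d
import std.algorithm : map;
import std.range : InputRange, inputRangeObject;

int foo(InputRange!int r)
{
    auto x = r.front;
    r.popFront();
    if (r.empty)
        return x;                // base case, standing in for r.length == 1
    // the mapped range is erased back to InputRange!int: no type divergence,
    // so this is ordinary recursion rather than template recursion
    return foo(inputRangeObject(map!"2*a"(r)));
}
```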
 Someone in D.learn tried to write quickSort in a similar way, but it
 obviously doesn't work because of the infinite number of template
 instantiations. To simulate lazy computation in D, you require a type
 transformation, which doesn't work with recursion. Your only choice is
 to abstract away to some sort of IRange!T interface to remove the type
 divergence, but then you lose performance, and in some cases, type
 information.

But Haskell has lost the performance argument right off the bat.

Maybe. As I mentioned later on, Haskell can do deforestation optimisations (among many other things) that D cannot do due to Haskell's overall design (absolute purity). I'm curious if you have benchmarks to back it up (I don't, but I'm not making the claim). In D it's easy to predict performance because there's a (relatively) simple mapping of D code to machine code. The same isn't true of Haskell.
 You are mixing tradeoffs here. There are pretty immutable rules of what
 can and cannot be done, many of which are consequences of a handful of
 simple facts. You want Haskell capability with D performance. You can't
 have that.

Well, as much as I would like that, I'm not arguing for it. My post was in reply to Timothy saying:

"If I write functional style code in D I unfortunately usually get feedback that it is 'extremely hard to read'."

and

"std.algorithm/std.range are also considerably (and as I understand, deliberately) underpowered for FP when compared to eg Haskell's standard set of libraries"

I understand the tradeoffs D makes, I'm just agreeing and elaborating on what Tim said: Phobos' functional offering is underpowered and much more difficult to understand compared to Haskell's. As you said, Haskell is more capable than D in this area.
 Of course, if every function that accepts a range becomes a template
 then you have the practical problem of managing the number of template
 instantiations. Code bloat is still a real problem with heavy template
 usage.

 Finally, map in Haskell just makes sense. You have a list of a's, a
 function of 'a to b' and you get a list of b's out the other end. It's
 simple, it's exactly what you want, and it just works. You can't say the
 same in D.

D's map is superior to Haskell's. There is no contest.

It's only superior if you measure superiority on two very specific properties:

- Applicability to different data structures
- Performance in simple cases

There are other important properties of Haskell's map that I'm sure Haskellites would use to argue that theirs is superior:

- Simplicity (a couple of lines of code to define with only basic language features)
- Gracefully works everywhere (no need to change style or interface for virtual functions or recursive functions - it just works)
 Again, Haskell's map forces only ONE range abstractions to fit
 everybody. D's map allows anyone to choose their own map abstraction,
 and is clever enough to define another map abstraction based on it that
 offers maximum capability and maximum efficiency.

 (In addition, Hakell's map forces ONE higher-order function
 representation, whereas D works with a function alias, function, or
 delegate.)

Yes, D wins here.

To summarise my position: I understand there are tradeoffs. Neither Haskell nor D is better on every axis of comparison. D's functional offering probably has better performance, and definitely has more applicability to different data structures, no question. On the other hand, Haskell is easier to read, write and understand, and is generally more expressive if you don't mind being limited to one data structure.
Sep 21 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Peter Alexander" <peter.alexander.au gmail.com> wrote in message 
news:j5dhe0$2ln6$1 digitalmars.com...
 For example, in Haskell, map (correctly) has the signature:

 map :: (a -> b) -> [a] -> [b]

 but in D, std.map has the signature (expressed in some Haskell/D 
 pseudocode)

 map :: (a -> b) -> [a] -> Map!((a -> b), [a])

That actually brings up something I've found kind of interesting about D's map and other similar functions. They return a type that's specially-tasked as "the return type for function {whatever}". So, in a way, it seems almost like saying:

ReturnTypeOf!(foo) foo() {...}

Ie, "The return type of this function is defined to be the type this function returns." Of course, I realize that's not actually what's going on.
Sep 22 2011
parent reply travert phare.normalesup.org (Christophe) writes:
"Nick Sabalausky" , dans le message (digitalmars.D:145002), a écrit :
 For example, in Haskell, map (correctly) has the signature:

 map :: (a -> b) -> [a] -> [b]

 but in D, std.map has the signature (expressed in some Haskell/D 
 pseudocode)

 map :: (a -> b) -> [a] -> Map!((a -> b), [a])


except that [a] should be a D Range, and that Map is a D Range too. So I'd go for:

map :: (a -> b) -> Range!A -> Range!B

oh, that's quite close to Haskell's map...

Let me try the other game: what is Haskell's signature? I need to introduce Haskell's types to D, so:

// all types are lazy pure:
template Haskell(T) { alias pure T function() Haskell; }

// Haskell's list:
template HaskellR(T) { alias pure Tuple!(Haskell!T, HaskellR!T) function() HaskellR; }

// Then the signature of map is:
HaskellR!B map(A,B)(Haskell!B function(Haskell!A), HaskellR!A);

Experienced Haskell users may correct me. All functions must be pure, and a lot of optimization comes from the memoization of those pure functions. Haskell!T could be rewritten to take this memoization into account.

I wonder what efficiency we would get compared to Haskell with a library based on this kind of stuff in D. We would probably miss many optimization opportunities that Haskell is tuned to find, but it must be fun to try it out.

-- Christophe
Sep 22 2011
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 09/22/2011 02:29 PM, Christophe wrote:
 "Nick Sabalausky" , dans le message (digitalmars.D:145002), a écrit :
 For example, in Haskell, map (correctly) has the signature:

 map :: (a ->  b) ->  [a] ->  [b]

 but in D, std.map has the signature (expressed in some Haskell/D
 pseudocode)

 map :: (a ->  b) ->  [a] ->  Map!((a ->  b), [a])


except that [a] should be a D Range, and that Map is a D Range too. So I'd go for:

map :: (a -> b) -> Range!A -> Range!B

oh, that's quite close to Haskell's map...

Let me try the other game: what is Haskell's signature? I need to introduce Haskell's types to D, so:

// all types are lazy pure:
template Haskell(T) { alias pure T function() Haskell; }

// Haskell's list:
template HaskellR(T) { alias pure Tuple!(Haskell!T, HaskellR!T) function() HaskellR; }

// Then the signature of map is:
HaskellR!B map(A,B)(Haskell!B function(Haskell!A), HaskellR!A);

Experienced Haskell users may correct me. All functions must be pure, and a lot of optimization comes from the memoization of those pure functions. Haskell!T could be rewritten to take this memoization into account.

I wonder what efficiency we would get compared to Haskell with a library based on this kind of stuff in D. We would probably miss many optimization opportunities that Haskell is tuned to find, but it must be fun to try it out.

I did: http://pastebin.com/2rEdx0RD

port of some Haskell code from RosettaCode (run it ~20 min to get results, Haskell runs in less than 2s):
http://pastebin.com/Vx4hXvaT

Output of the second example is screwed up with the latest release though, I don't know why, but converting BigNums to string is a PITA anyways... Has there been a regression in the std.bigint.BigInt.toString code?

The main performance issue is the garbage collector. Turning it off will help performance a lot if you have enough RAM. Furthermore, DMD does not do any advanced optimizations.
Sep 22 2011
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Timon Gehr:

 Furthermore, DMD does not do any advanced optimizations

In Haskell there are ways to add library-defined optimizations, with rewrite rules:
http://www.haskell.org/haskellwiki/Playing_by_the_rules
http://www.haskell.org/haskellwiki/GHC/Using_rules

There are also papers about this. Some people are discussing how to add library-defined error messages in Haskell, to reduce the big problems caused by complex type error messages. Maybe similar solutions will be the way to solve similar optimization problems in future D.

Bye,
bearophile
Sep 22 2011
prev sibling parent reply travert phare.normalesup.org (Christophe) writes:
Timon Gehr , dans le message (digitalmars.D:145067), a écrit :
 I did: http://pastebin.com/2rEdx0RD

Nice!
 port of some haskell code from RosettaCode (run it ~20 min to get 
 results, haskell runs in less than 2s)
 http://pastebin.com/Vx4hXvaT

 The main performance issue is the garbage collector. Turning it off will 
 help performance a lot if you have enough RAM. Furthermore, DMD does not 
 do any advanced optimizations

Yes, we would certainly get a lot more performance with specialized memory usage. Shutting off the GC and manually freeing delegate contexts once they are computed may help a lot.

We may also improve by using more CTFE, and less laziness (LList for instance is a bit too lazy). The code must stop every time it gets a lazy object just to know whether it must compute the value or just use it. We should use the processor while it has to wait. Multiple threads can help, but I guess Haskell compilers have much better tools to do that.

I doubt we can reach a decent fraction of Haskell's speed, but it sounds like a fun thing to do, so I may give it a try. That would be good for my optimization skills :)

-- Christophe
Sep 23 2011
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 09/23/2011 10:07 AM, Christophe wrote:
 Timon Gehr , dans le message (digitalmars.D:145067), a écrit :
 I did: http://pastebin.com/2rEdx0RD

Nice!
 port of some haskell code from RosettaCode (run it ~20 min to get
 results, haskell runs in less than 2s)
 http://pastebin.com/Vx4hXvaT

 The main performance issue is the garbage collector. Turning it off will
 help performance a lot if you have enough RAM. Furthermore, DMD does not
 do any advanced optimizations

Yes, we would certainly get a lot more performance with specialized memory usage. Shutting off the GC and manually freeing delegate contexts once they are computed may help a lot.

The trouble is that multiple delegates could share (part of) the same context.
Sep 23 2011
prev sibling next sibling parent Sean Kelly <sean invisibleduck.org> writes:
You might want to play with GC.reserve.

Sent from my iPhone

On Sep 22, 2011, at 4:35 PM, Timon Gehr <timon.gehr gmx.ch> wrote:

 On 09/22/2011 02:29 PM, Christophe wrote:
 "Nick Sabalausky" , dans le message (digitalmars.D:145002), a écrit :

 For example, in Haskell, map (correctly) has the signature:

 map :: (a ->  b) ->  [a] ->  [b]

 but in D, std.map has the signature (expressed in some Haskell/D
 pseudocode)

 map :: (a ->  b) ->  [a] ->  Map!((a ->  b), [a])

 except that [a] should be a D Range, and that Map is a D Range too. So I'd go for:

 map :: (a -> b) -> Range!A -> Range!B

 oh, that's quite close to Haskell's map...

 Let me try the other game: what is Haskell's signature? I need to introduce Haskell's types to D, so:

 // all types are lazy pure:
 template Haskell(T) { alias pure T function() Haskell; }

 // Haskell's list:
 template HaskellR(T) { alias pure Tuple!(Haskell!T, HaskellR!T) function() HaskellR; }

 // Then the signature of map is:
 HaskellR!B map(A,B)(Haskell!B function(Haskell!A), HaskellR!A);

 Experienced Haskell users may correct me. All functions must be pure, and a lot of optimization comes from the memoization of those pure functions. Haskell!T could be rewritten to take this memoization into account.

 I wonder what efficiency we would get compared to Haskell with a library based on this kind of stuff in D. We would probably miss many optimization opportunities that Haskell is tuned to find, but it must be fun to try it out.

 I did: http://pastebin.com/2rEdx0RD

 port of some haskell code from RosettaCode (run it ~20 min to get results, haskell runs in less than 2s)
 http://pastebin.com/Vx4hXvaT

 Output of the second example is screwed up with the latest release though, I don't know why, but converting BigNums to string is a PITA anyways... Has there been a regression in the std.bigint.BigInt.toString code?

 The main performance issue is the garbage collector. Turning it off will help performance a lot if you have enough RAM. Furthermore, DMD does not do any advanced optimizations.
Sep 22 2011
prev sibling parent "Simen Kjaeraas" <simen.kjaras gmail.com> writes:
On Thu, 22 Sep 2011 11:22:10 +0200, Jacob Carlborg <doob me.com> wrote:

 auto newrange = filter!"a<5"(map!"2*a"(range));

 At first some people get the heebiejeebies when seeing string-based
 lambda and the implicit naming convention for unary and binary
 functions. More importantly, there's the disadvantage you can't access
 local functions inside a string lambda. We should probably add a
 specialized lambda syntax.

I like this syntax: auto newRange = range.map(a => 2 * a).filter(a => a < 5);

I dislike this syntax, in that it seems to pass the delegates as normal function parameters, rather than template parameters. New version:

auto newRange = range.map!(a => 2 * a).filter!(a => a < 5);

-- Simen
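For reference, a sketch of how that template-parameter version reads in full, assuming UFCS and the arrow-lambda syntax:

```d
import std.algorithm : filter, map;

void main()
{
    auto range = [1, 2, 3];
    // lambdas passed as template alias parameters, chained with UFCS
    auto newRange = range.map!(a => 2 * a).filter!(a => a < 5);
    // lazily yields 2 and 4
}
```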
Sep 23 2011
prev sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
What is the purpose of this ternary operator on slide 16?
static if (is(typeof(1 ? T[0].init : T[1].init) U))
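(A sketch of what that line appears to be doing, assuming it is the usual common-type idiom: the type of `cond ? x : y` is the type both operands implicitly convert to, and the `is(... U)` form captures it as U.)

```d
// the ?: expression's type is the common type of its two branches,
// so this captures "the common type of int and double" as U
static if (is(typeof(1 ? int.init : double.init) U))
{
    pragma(msg, U); // prints: double (at compile time)
}

void main() {}
```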
Sep 21 2011