
digitalmars.D - Bartosz Milewski seems to like D more than C++ now :)

reply "Szymon Gatner" <noemail gmail.com> writes:
I had similar thoughts when watching GoingNative 2013:
http://bartoszmilewski.com/2013/09/19/edward-chands/
I was getting more and more scared with every talk, and now I am
value-izing my polymorphic types à la Sean Parent
Sep 19 2013
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/19/13 3:18 PM, Szymon Gatner wrote:
 I had similar thoughts when watching GoingNaive 2013:
 http://bartoszmilewski.com/2013/09/19/edward-chands/

Nice piece.
 I was more and more scared with every talk and now I am valualizing my
 polymorphic types a'la Sean Parent

That I think is sketchy advice. Andrei
Sep 19 2013
prev sibling next sibling parent "Szymon Gatner" <noemail gmail.com> writes:
On Thursday, 19 September 2013 at 22:46:09 UTC, Andrei 
Alexandrescu wrote:
 On 9/19/13 3:18 PM, Szymon Gatner wrote:
 I had similar thoughts when watching GoingNaive 2013:
 http://bartoszmilewski.com/2013/09/19/edward-chands/

Nice piece.
 I was more and more scared with every talk and now I am 
 valualizing my
 polymorphic types a'la Sean Parent

That I think is sketchy advice.

Tbh I am still testing the effects of this change, and so far I do see positive effects on the client code. But I also see that what was obvious to users before (everyone understands that they should derive from a polymorphic base to get polymorphic behavior) is now confusing (how can I customize if I can't derive?). Please do elaborate on your view. Always interesting to read about design from smarter people :)
Sep 19 2013
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/19/2013 3:18 PM, Szymon Gatner wrote:
 I had similar thoughts when watching GoingNaive 2013:
 http://bartoszmilewski.com/2013/09/19/edward-chands/

http://www.reddit.com/r/programming/comments/1mqpcq/edward_chands_by_bartosz_milewski/
https://news.ycombinator.com/item?id=6414162
Sep 19 2013
parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/19/2013 4:32 PM, Walter Bright wrote:
 On 9/19/2013 3:18 PM, Szymon Gatner wrote:
 I had similar thoughts when watching GoingNaive 2013:
 http://bartoszmilewski.com/2013/09/19/edward-chands/

http://www.reddit.com/r/programming/comments/1mqpcq/edward_chands_by_bartosz_milewski/
https://news.ycombinator.com/item?id=6414162

And Bartosz responds in this one: http://www.reddit.com/r/cpp/comments/1mqhxl/edward_chands_by_bartosz_milewski/
Sep 19 2013
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Sep 20, 2013 at 12:18:22AM +0200, Szymon Gatner wrote:
 I had similar thoughts when watching GoingNaive 2013:
 http://bartoszmilewski.com/2013/09/19/edward-chands/
 I was more and more scared with every talk and now I am valualizing
 my polymorphic types a'la Sean Parent

Quote:

 There was so much talk about how not to use C++ that it occurred to me
 that maybe this wasn’t the problem of incompetent programmers, but that
 straightforward C++ is plain wrong. So if you just learn the primitives
 of the language and try to use them, you’re doomed. ... [big snippage]
 ... I can go on and on like this (and I often do!). Do you see the
 pattern? Every remedy breeds another remedy. It’s no longer just the C
 subset that should be avoided. Every new language feature or library
 addition comes with a new series of gotchas. And you know a new feature
 is badly designed if Scott Meyers has a talk about it. (His latest was
 about the pitfalls of, you guessed it, move semantics.)

This is sooo true. It reflects my experience with C++. Honestly, it got to the point where I gave up trying to follow the remedy upon the patch to another remedy to a third remedy that patches yet another remedy on top of a fundamentally broken core. I just adopted my own C++ coding style and stuck with it. Unfortunately, that approach is unworkable in real-life projects involving more than one programmer.

At work, I dread every single time I need to look at the C++ module (which fortunately has been confined to a single module, although it's also one of the largest). For "performance reasons" they eschewed the built-in C++ try/catch constructs and implemented their own replacements using preprocessor macros. You bet there are memory leaks, pointer bugs, and all sorts of nasty things stemming from this one "optimization" alone. And it just goes downhill from there.

It's this endless cycle of a remedy upon a remedy upon a patch to a remedy that drove me to look for something better. I found D. :) One of the outstanding features of D to me is that code written with simple language constructs is, surprisingly, actually correct. As opposed to C++'s situation of code being wrong by default until you learn the 8th-circle black-belt advanced-level C++ coding techniques.
Anyway, this bit sounds interesting:

 It’s a common but false belief that reference counting (using shared
 pointers in particular) is better than garbage collection. There is
 actual research showing that the two approaches are just two sides of
 the same coin. You should realize that deleting a shared pointer may
 lead to an arbitrarily long pause in program execution, with similar
 performance characteristics as a garbage sweep. It’s not only because
 every serious reference counting algorithm must be able to deal with
 cycles, but also because every time a reference count goes to zero on a
 piece of data a whole graph of pointers reachable from that object has
 to be traversed. A data structure built with shared pointers might take
 a long time to delete and, except for simple cases, you’ll never know
 which shared pointer will go out of scope last and trigger it.

Sounds like D's decision to go with a GC may not be *that* bad after all...

 Let’s take a great leap of faith and assume that all these things will
 be standardized and implemented by, say, 2015. Even if that happens, I
 still don’t think people will be able to use C++ for mainstream
 parallel programming. C++ has been designed for single thread
 programming, and parallel programming requires a revolutionary rather
 than evolutionary change. Two words: data races. Imperative languages
 offer no protection against data races — maybe with the exception of D.

Welp, time to get our act together and clean up that mess that is 'shared', so that D will actually stand a chance of lasting past the next 10 years... ;-)

T

-- 
Ruby is essentially Perl minus Wall.
Sep 19 2013
prev sibling next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Szymon Gatner:

 http://bartoszmilewski.com/2013/09/19/edward-chands/

From the blog post:
Imperative languages offer no protection against data races — 
maybe with the exception of D.<

What about Ada and Rust?
Ask any C++ guru and they will tell you: avoid mutation, avoid 
side effects, don’t use loops, avoid class hierarchies and 
inheritance.<

At Going Native 2013 there was a very good talk that, among other things, suggests avoiding raw loops in C++ code. While this is good advice (so much so that raw loops are becoming a bit of a code smell for me), there are several situations where imperative loops keep being better for me. Explicit recursion is not always more readable or easier to understand than imperative foreach loops. While most functions could and should be strongly pure, I often like mutation and imperative code inside functions. Haskell is a purely functional language, but I prefer a mix, like D, Scala, F#, etc. So I have to say that perhaps "D is the best functional language"[1].

Bye,
bearophile

[1] That is as false as saying that "Haskell is the best imperative language", as some Haskellers sometimes say :-)
Sep 19 2013
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-09-20 11:37, Szymon Gatner wrote:

 If only std algorithms took containers (by that I mean things that
 container for accepts too) as arguments (and not iterators)... even in
 the form of new functions like foreach_all, transform_all etc.

Can't a container be a range as well? -- /Jacob Carlborg
Sep 20 2013
next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
20-Sep-2013 14:00, Jacob Carlborg writes:
 On 2013-09-20 11:37, Szymon Gatner wrote:

 If only std algorithms took containers (by that I mean things that
 container for accepts too) as arguments (and not iterators)... even in
 the form of new functions like foreach_all, transform_all etc.

Can't a container be a range as well?

For Christ's sake no, no and no. For starters, a range skips/drops elements when iterating, and thus iteration has its own state that is useless for a container otherwise.

The idea of a "contange" spreads like a virus, no matter how abominable the end result is. The fact that slices sometimes look like containers (BTW ill-suited for anything beyond primitives/PODish stuff) must be the cornerstone of this belief. Strengthened by the motto of trying to make user-defined types mimic the behavior of built-ins, it leads to a school of thought that it's fine to blend ownership and access.

TL;DR: Suboptimal, unnatural and error prone are the keywords.

-- 
Dmitry Olshansky
Sep 20 2013
next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
20-Sep-2013 15:01, Szymon Gatner writes:
 On Friday, 20 September 2013 at 10:47:52 UTC, Dmitry Olshansky wrote:
 20-Sep-2013 14:00, Jacob Carlborg writes:
 On 2013-09-20 11:37, Szymon Gatner wrote:

 If only std algorithms took containers (by that I mean things that
 container for accepts too) as arguments (and not iterators)... even in
 the form of new functions like foreach_all, transform_all etc.

Can't a container be a range as well?

For Christ sake no, no and no. For starters range skips/drops elements when iterating, and thusly iteration has its own state that is useless for a container otherwise.

Iteration is a stateful process; ranges are related to the process of iteration, not to containers. As you say, state is useless for containers but is necessary for iteration and its context.

A text-book example of self-destruction(?). Ranges (in particular Input/Forward) are not much above an encapsulation of iteration, hence they must contain the state required to iterate said elements. Which leads to the point that indeed containers have no business being ranges by themselves.

The bottom line is:

sort(container[]);

vs

sort(container);

Where I hardly see how coupling containers with algorithms can bring even the slightest benefit.

-- 
Dmitry Olshansky
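This is concrete in today's Phobos; a minimal sketch using std.container.Array, where the `container[]` slice is the range the algorithm consumes, and the container itself never is:

```d
import std.algorithm.comparison : equal;
import std.algorithm.sorting : sort;
import std.container.array : Array;

void main()
{
    auto c = Array!int(3, 1, 2);
    sort(c[]);   // sort operates on a range sliced from the container,
                 // not on the container itself
    assert(equal(c[], [1, 2, 3]));
}
```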
Sep 20 2013
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
20-Sep-2013 17:50, Szymon Gatner writes:
 On Friday, 20 September 2013 at 13:32:47 UTC, Dmitry Olshansky wrote:
 20-Sep-2013 15:01, Szymon Gatner writes:
 On Friday, 20 September 2013 at 10:47:52 UTC, Dmitry Olshansky wrote:



 A text-book example of self-destruction(?).
 Ranges (in particular Input/Forward) are not much above encapsulation
 of iteration, hence must contain that state required to iterate said
 elements. Which leads to the point that indeed containers have no
 business being ranges by themselves.

 The bottom line is:
 sort(container[]);
 vs
 sort(container);

 Where I hardly see how coupling containers with algorithms can bring
 even slightest benefit.

OK, so it seems we agree. I never said that containers should be ranges. Ranges are an abstraction of iteration, and that is it. A single container can have multiple ranges existing at the same time. State is attached to ranges, not containers. And most importantly, ranges are always mutable, even if the underlying container isn't. Ranges are meant to have state. No idea what you mean by self-destruction.

Then it may be a misunderstanding on my part. I was referring to your previous reply. Where I basically said:
 Can't a container be a range as well?


For Christ sake no, no and no.

[... Because that would be ...]
 TL;DR: Suboptimal, unnatural and error prone are keywords.

Then came your question - why would it be suboptimal? - which your second reply seems to clearly explain: extra state placed where it doesn't belong. I can't easily correlate your two answers, as they look as if the second one answers the questions of the first. Anyhow, we are in agreement here.

-- 
Dmitry Olshansky
Sep 20 2013
parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
20-Sep-2013 18:13, Szymon Gatner writes:
 On Friday, 20 September 2013 at 14:04:06 UTC, Dmitry Olshansky wrote:

 Can't a container be a range as well?


For Christ sake no, no and no.

[... Because that would be ...]
 TL;DR: Suboptimal, unnatural and error prone are keywords.

Then your question - Why would it be suboptimal? Which your second reply seem to clearly explain: extra state placed where it doesn't belong. I can't easily correlate your two answers as they look as if the second one answers questions of the first. Anyhow we are in agreement here.

Mind that "Can't a container be a range as well?" was not from me.

Yup, but it was what I was answering ... when you asked why it would be suboptimal... A container that is a range at the same time is suboptimal, that's all.
 Still, moving computation over a range from for loop body into the range
 implementation (like filtering) incurs no performance penalty and has
 additional benefits of composability and maintainability. Do you mean
 that is suboptimal?

I made no statement on this, I think. It shouldn't be inefficient. The only problem with it that I foresee is that, in order to take advantage of SIMD vectorization in the CPU, it could be harder for the compiler to auto-magically vectorize such code. For that matter, I think that e.g. explicitly providing a range of float4 feels like a better way to go.

-- 
Dmitry Olshansky
Sep 20 2013
prev sibling next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
20-Sep-2013 14:55, Szymon Gatner writes:
 On Friday, 20 September 2013 at 10:47:52 UTC, Dmitry Olshansky wrote:
 20-Sep-2013 14:00, Jacob Carlborg writes:
 On 2013-09-20 11:37, Szymon Gatner wrote:

 If only std algorithms took containers (by that I mean things that
 container for accepts too) as arguments (and not iterators)... even in
 the form of new functions like foreach_all, transform_all etc.

Can't a container be a range as well?

For Christ sake no, no and no. For starters range skips/drops elements when iterating, and thusly iteration has its own state that is useless for a container otherwise.

That would be a filtering range which adds additional logic that costs exactly the same as an if() statement inside a for loop when filtering on condition manually.

Better examples are traversing e.g. a binary tree depth-first in-order, or post-order, and that would require state. Again, it's easy to miss this by looking only at built-in arrays.
 TL;DR: Suboptimal, unnatural and error prone are keywords.

Why would it be suboptimal?

If said tree needs to be a range, I would be horrified to see how it manages to be, at the same time, a range that (for the sake of example) traverses said tree depth-first in-order. Not even talking about trees having no single "natural" range over them.

-- 
Dmitry Olshansky
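The filtering claim quoted above (a filtering range costs the same as a manual if() inside the loop) is easy to check with a small sketch:

```d
import std.algorithm.iteration : filter;

void main()
{
    auto data = [1, 2, 3, 4, 5, 6];
    int manual, ranged;
    foreach (x; data)
        if (x % 2 == 0) manual += x;            // manual filtering
    foreach (x; data.filter!(x => x % 2 == 0))  // filtering range
        ranged += x;
    assert(manual == 12 && manual == ranged);
}
```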
Sep 20 2013
next sibling parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
20-Sep-2013 18:48, H. S. Teoh writes:
 On Fri, Sep 20, 2013 at 05:36:59PM +0400, Dmitry Olshansky wrote:
 20-Sep-2013 14:55, Szymon Gatner writes:
 On Friday, 20 September 2013 at 10:47:52 UTC, Dmitry Olshansky wrote:
 20-Sep-2013 14:00, Jacob Carlborg writes:
 On 2013-09-20 11:37, Szymon Gatner wrote:

 If only std algorithms took containers (by that I mean things that
 container for accepts too) as arguments (and not iterators)...
 even in the form of new functions like foreach_all, transform_all
 etc.

Can't a container be a range as well?




A container should not be confused with a range. That way leads to dragons. :-P (It's rather unfortunate that built-in arrays conflate the two, it leads to a lot of wrong code that works only with arrays but not with "real" ranges.)
 For Christ sake no, no and no. For starters range skips/drops
 elements when iterating, and thusly iteration has its own state that
 is useless for a container otherwise.

That would be a filtering range which adds additional logic that costs exactly the same as an if() statement inside a for loop when filtering on condition manually.

Better examples are traversing e.g. binary-tree depth-first in-order, or post-order and that would require state. Again it's easy to miss by looking at built-in arrays.

Traversing the tree at all requires state, whether or not you do it with ranges. You either put the state on the runtime stack (recursion), or you put the state on the heap in the range adaptor. Same thing.
 TL;DR: Suboptimal, unnatural and error prone are keywords.

Why would it be suboptimal?

If said tree needs to be a range I would be horrified to see how it manges to be at the same time a range that (for the sake of example) traverses said tree depth-first in-order. Not even talking about trees having no one "natural" range over them.

Some trees do, like document trees. But yeah, in general, you don't have a natural ordering to trees.

 But then again, the whole point of ranges is *linear* traversal. If
 you're not doing that, then it's probably the wrong abstraction to use.

Then traversing all files in a directory in a specific order is not interesting to you? I hardly find that logical. An abstraction is wrong only where it doesn't make sense, and here it does make sense. There is something linear about a tree: every sane path taken through it is linear (after all, there are no cycles). BTW, I see nothing problematic in seeking a minimal set of primitives that would allow working on a tree-like topology. It won't be the same range, but it would nevertheless be useful.
 (FWIW, while SortedRange is a neat hack, I haven't actually found it
 that useful in practice. Most of the ranges I actually use are for the
 cases where it's an input/forward range and you don't want to store the
 whole thing, so random access is impractical.

A sorted forward range is actually fine. You may do interesting things with them, like O(N+M) merging, subtracting and whatnot. Set operations for the win (they are not defined in Phobos yet).
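For reference, Phobos does now provide such O(N+M) operations over sorted ranges in std.algorithm.setops; a sketch:

```d
import std.algorithm.comparison : equal;
import std.algorithm.setops : setDifference, setIntersection;

void main()
{
    // Both inputs must already be sorted; each operation is O(N + M)
    // and lazily walks the two ranges in lockstep.
    auto a = [1, 2, 4, 6, 8];
    auto b = [2, 3, 6];
    assert(equal(setIntersection(a, b), [2, 6]));
    assert(equal(setDifference(a, b), [1, 4, 8]));
}
```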
 The cases where I actually
 have random access usually turn out to be plain ole arrays anyway, so
 the SortedRange abstraction isn't really necessary.)

 So for trees, I'd argue that you need a tree-specific iteration
 abstraction, not ranges, which are linear. It's a clear case of
 structure conflict. ;-)


 T

-- Dmitry Olshansky
Sep 20 2013
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/20/13 7:55 AM, Craig Dillabaugh wrote:
 Excuse my lack of knowledge of ranges - maybe what I am proposing is
 infeasible - but for a binary tree, for example, couldn't you have an
 InOrderRange, PreOrderRange, and PostOrderRange, or alternately a
 DepthFirstRange and BreadthFirstRange?  From a consumer's view those
 would be linear operations?

The tree object would have functions returning by value each of these range types. Andrei
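A minimal sketch of what such a range factory might look like (the names Tree and InOrderRange are hypothetical, not Phobos types): an in-order forward traversal whose explicit stack is the iteration state, living in the range rather than in the tree:

```d
struct Node { int value; Node* left, right; }

// One possible traversal range over a binary tree; the stack is the
// iteration state and belongs to the range, not to the container.
struct InOrderRange
{
    private Node*[] stack;

    this(Node* root) { descend(root); }

    private void descend(Node* n)
    {
        for (; n !is null; n = n.left) stack ~= n;  // push leftmost path
    }

    @property bool empty() const { return stack.length == 0; }
    @property int front() const { return stack[$ - 1].value; }

    void popFront()
    {
        auto n = stack[$ - 1];
        stack = stack[0 .. $ - 1];
        descend(n.right);
    }
}

struct Tree
{
    Node* root;
    // The container as a range factory: it returns a range by value.
    InOrderRange inOrder() { return InOrderRange(root); }
}

void main()
{
    import std.algorithm.comparison : equal;
    auto t = Tree(new Node(2, new Node(1), new Node(3)));
    assert(equal(t.inOrder(), [1, 2, 3]));
}
```

Two live InOrderRange values over the same tree don't interfere with each other, which is the whole point of keeping the state out of the container.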
Sep 20 2013
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/20/13 8:13 AM, Joseph Rushton Wakeling wrote:
 On 20/09/13 16:48, H. S. Teoh wrote:
 A container should not be confused with a range. That way leads to
 dragons. :-P  (It's rather unfortunate that built-in arrays conflate the
 two, it leads to a lot of wrong code that works only with arrays but not
 with "real" ranges.)

Built-in arrays are not _always_ ranges. Consider const(int[]) ... as I found out recently, it's _not_ a range, because you can't popFront on a const entity.

Well you can't call popFront on any const range. After some time I learned to make peace with the dual nature of built-in slices. Part of that was the failed experiment to introduce the type "new T[]". Another part is the simple realization that built-in slices are not "regular" ranges, they have deeper connections in D's object model, so they're allowed to be a little special. Andrei
Sep 20 2013
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/20/13 9:46 AM, Szymon Gatner wrote:
 On Friday, 20 September 2013 at 16:38:29 UTC, Andrei Alexandrescu wrote:
 On 9/20/13 8:13 AM, Joseph Rushton Wakeling wrote:
 On 20/09/13 16:48, H. S. Teoh wrote:
 A container should not be confused with a range. That way leads to
 dragons. :-P  (It's rather unfortunate that built-in arrays conflate
 the
 two, it leads to a lot of wrong code that works only with arrays but
 not
 with "real" ranges.)

Built-in arrays are not _always_ ranges. Consider const(int[]) ... as I found out recently, it's _not_ a range, because you can't popFront on a const entity.

Well you can't call popFront on any const range.

What is even the purpose of const ranges? It makes little sense imho.

I'd agree. As an aside, they don't have to make sense. Andrei
Sep 20 2013
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/20/13 10:02 AM, Szymon Gatner wrote:
 On Friday, 20 September 2013 at 16:57:43 UTC, Jonathan M Davis wrote:

 If an object is const, then all of its members are const, which means
 that any
 ranges you get from its members will be const, making such ranges
 useless.

That is so weird to hear, considering I added ranges to my C++ code and my Vector<T>::all() const can easily return a non-const range even though the container is itself const. This kinda looks like D being more limited in that area than C++... Or I really am not getting something.

Yah, it should be possible for a const container to offer ranges over its (non-modifiable) elements. Andrei
Sep 20 2013
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/20/13 10:52 AM, Jonathan M Davis wrote:
 On Friday, September 20, 2013 10:47:15 Andrei Alexandrescu wrote:
 On 9/20/13 10:02 AM, Szymon Gatner wrote:
 On Friday, 20 September 2013 at 16:57:43 UTC, Jonathan M Davis wrote:
 If an object is const, then all of its members are const, which means
 that any
 ranges you get from its members will be const, making such ranges
 useless.

That is so weird to hear considering I added ranges to my C++ code and my Vector<T>::all() const can easily return non-const range even tho container is itself const. This kinda looks like D is more limited in that area than C++... Or I really am not getting something.

Yah, it should be possible for a const container to offer ranges over its (non-modifiable) elements.

That's probably easy to do if the container is well-written, because the container creates the range. The problem is converting a const range to a tail-const one, which is what you have to do if you ever end up with a const range for any reason - and if you're using const much, that's trivial to have happen (especially if you ever have a range as a member variable). Unfortunately, in practice, that conversion is quite difficult to do with user-defined types, even though it works just fine with arrays.

I'm not sure such a conversion is needed all that often. Andrei
Sep 20 2013
prev sibling next sibling parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
20-Sep-2013 15:08, Nick Sabalausky writes:
 On Fri, 20 Sep 2013 14:47:38 +0400
 Dmitry Olshansky <dmitry.olsh gmail.com> wrote:
 For Christ sake no, no and no. For starters range skips/drops
 elements when iterating, and thusly iteration has its own state that
 is useless for a container otherwise.

 The idea of "contange" spreads like virus, no matter how abominable
 the end result is. The fact that slices sometimes look like
 containers (BTW ill-suited for anything beyond primitives/PODish
 stuff )  must be the corner stone of this belief. Strengthened on the
 motto of trying to make user defined types to mimic behavior of
 built-ins it leads to a school of thought that it's fine to blend
 ownership and access.

 I can vouch that the ease of conflating array/slice/range, while
 normally a wonderful convenience, has indeed led me into confusion when
 trying to make a custom type range-compatible. I felt naturally
 inclined to add the range primitives to the type itself, but that led
 to issues like you described (especially the fact that a range
 *consumes* its elements while iterating, plus the inability to provide
 alternate views of the same data - which is a very powerful tool IMO).
 I finally started to grok that a container needs to *provide* a range,
 and not actually *be* one.

Aye. This ^^ I can't stress that last statement enough.
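The "consumes while iterating" point is easy to see with built-in slices, where the range is a view over the data, not the data itself:

```d
import std.range.primitives : popFront;

void main()
{
    int[] data = [1, 2, 3];
    auto view = data;           // the slice is the range
    view.popFront();            // consuming the range advances the view...
    assert(view == [2, 3]);
    assert(data == [1, 2, 3]);  // ...but the underlying data is untouched
}
```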
 Well, except maybe for output ranges. (I think?)

Output ranges in my mind may wrap a suitable container by ref. I think that adding 'put' to a container is not a bad idea. It would essentially do insertAny(..), implying that the order of insertion is not interesting. However, since there are many ways of inserting items (insertFront/insertBack?), it makes more sense in general to have a simple helper sink. backInserter? Silly but simple to remember.
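A sketch of such a helper sink (backInserter is the hypothetical name proposed above, not a Phobos symbol): wrap the container by reference and forward put to insertBack:

```d
import std.algorithm.comparison : equal;
import std.algorithm.mutation : copy;
import std.container.array : Array;
import std.range : iota;

// Hypothetical helper: an output range that appends to a wrapped container.
struct BackInserter(C)
{
    private C* container;
    void put(E)(E value) { container.insertBack(value); }
}

auto backInserter(C)(ref C c) { return BackInserter!C(&c); }

void main()
{
    Array!int a;
    copy(iota(3), backInserter(a));  // fill the container through the sink
    assert(equal(a[], [0, 1, 2]));
}
```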
 TL;DR: Suboptimal, unnatural and error prone are keywords.

They are? Cool!

Yes :)

1. Suboptimal - extra state to worry about, with mutation baked in. A harder path to immutability/shared.
2. Unnatural - the reason to have ranges in the first place? Now we pass containers to algorithms ... The decoupling is lost.
3. Error prone - unclean semantics. Does iterating a container mutate it? If not, what about iterating it twice in a loop (inner-outer)? Is there a copy somewhere? A copy of what? I can go on for days here.
 auto foo(T)(real a, unnatural b, lazy Suboptimal!T opts) {...}

 Looks fun! :)

-- Dmitry Olshansky
Sep 20 2013
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/20/13 3:47 AM, Dmitry Olshansky wrote:
 20-Sep-2013 14:00, Jacob Carlborg writes:
 On 2013-09-20 11:37, Szymon Gatner wrote:

 If only std algorithms took containers (by that I mean things that
 container for accepts too) as arguments (and not iterators)... even in
 the form of new functions like foreach_all, transform_all etc.

Can't a container be a range as well?

For Christ sake no, no and no. For starters range skips/drops elements when iterating, and thusly iteration has its own state that is useless for a container otherwise. The idea of "contange" spreads like virus, no matter how abominable the end result is. The fact that slices sometimes look like containers (BTW ill-suited for anything beyond primitives/PODish stuff ) must be the corner stone of this belief. Strengthened on the motto of trying to make user defined types to mimic behavior of built-ins it leads to a school of thought that it's fine to blend ownership and access. TL;DR: Suboptimal, unnatural and error prone are keywords.

Agreed. A container is a range factory. Andrei
Sep 20 2013
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-09-20 12:02, Szymon Gatner wrote:

 Not sure what you mean, but C++ has no concept of a range. Apparently
 there is now a study group, but atm standard algos are crippled
 compared to "container for".

Oh, I was thinking about D. Sorry for the confusion. -- /Jacob Carlborg
Sep 20 2013
prev sibling next sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 20/09/13 16:48, H. S. Teoh wrote:
 A container should not be confused with a range. That way leads to
 dragons. :-P  (It's rather unfortunate that built-in arrays conflate the
 two, it leads to a lot of wrong code that works only with arrays but not
 with "real" ranges.)

Built-in arrays are not _always_ ranges. Consider const(int[]) ... as I found out recently, it's _not_ a range, because you can't popFront on a const entity.
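This can be verified at compile time with isInputRange from std.range.primitives: the head-const slice fails, while the tail-const version passes:

```d
import std.range.primitives : isInputRange;

// const(int[]): the slice itself is const, so popFront can't advance it.
static assert(!isInputRange!(const(int[])));

// const(int)[]: const elements, but a mutable slice - a usable range.
static assert(isInputRange!(const(int)[]));

void main() {}
```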
Sep 20 2013
prev sibling next sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 20/09/13 17:22, H. S. Teoh wrote:
 Which makes it even more confusing, since newbies would probably equate
 arrays (the container) with ranges from their frequent use in range
 examples.

Yes, it's completely non-obvious. I think the first time I realized the range/container distinction was when I tried experimentally replacing some built-in arrays with std.container.Array and discovered that I couldn't use them in a foreach loop.
 Perhaps it's more useful to think of T[] not as an array per se, but as
 a *slice* of the underlying array data which is managed by druntime. I
 think I'm OK with saying that arrays (i.e. the underlying data) are
 containers, while slices (what we think of today as "arrays") are
 ranges.

It's not a bad description, but I'm not sure that takes into account the const(T[]) case.
Sep 20 2013
prev sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 20/09/13 20:53, H. S. Teoh wrote:
 Sad to say, I encountered a good number of Phobos bugs caused by the
 conflation between built-in arrays and ranges. Code would inadvertently
 assume array behaviour on ranges, and break when you pass in a non-array
 range. Some of these have been fixed; I'm pretty sure there are still
 more lurking around.

In my case, I discovered a number of bugs that stemmed from unittests where the default range type used was an array. In that case, it wasn't so much the conflation of range and container, but the fact that arrays are random-access ranges and so have the maximal set of range properties -- so code that _theoretically_ should have worked with input ranges was actually failing, because it wasn't being tested.
 Actually, it helps you understand the const(T[]) case. To iterate over a
 const array, you need a range over it (i.e., a slice); and indeed,
 writing arr[] on an array of type const(T[]) gives you a tail-const
 slice of type const(T)[], which *is* an iterable range.

 The confusion really just stems from the conflation between T[] and a
 range over T[] in the non-const case.

Ahh, OK. You're right, that does help. Thank you! :-)
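The quoted point about arr[], written down as a small compilable sketch:

```d
import std.algorithm.iteration : sum;

void main()
{
    const(int[]) arr = [1, 2, 3];
    auto s = arr[];   // slicing a const array yields a tail-const slice
    static assert(is(typeof(s) == const(int)[]));
    assert(s.sum == 6);  // which is iterable as an input range
}
```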
Sep 24 2013
prev sibling next sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
Am 20.09.2013 16:24, schrieb renoX:
 On Friday, 20 September 2013 at 09:43:10 UTC, bearophile wrote:
 [cut]
 Another thing to notice is that Haskell user defined operators
 sometimes hurt my ability to read code. This is composed with the
 precedent problem.

++ I remember that reading a tutorial about Elm felt much easier than reading other texts based on Haskell, because Elm's author took great care in choosing readable operators. That said, he made the same mistake as Haskell's authors: currying is a *mathematical detail* which shouldn't obscure a function's type: 'f: a->b->c' is less readable than 'f: a,b->c'. renoX

That is standard in all languages of the ML family, not only Haskell.
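The distinction renoX is drawing can be sketched in D, where both shapes are expressible (add and addCurried are illustrative helpers, not standard functions):

```d
// Uncurried: f : (a, b) -> c
int add(int a, int b) { return a + b; }

// Curried: f : a -> (b -> c); applying the first argument yields a function.
auto addCurried(int a) { return (int b) => a + b; }

void main()
{
    assert(add(2, 3) == 5);
    assert(addCurried(2)(3) == 5);
}
```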
Sep 20 2013
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 09/23/2013 02:08 PM, renoX wrote:
 On Friday, 20 September 2013 at 15:23:20 UTC, Paulo Pinto wrote:
 Am 20.09.2013 16:24, schrieb renoX:
 That said, he made the same mistake as Haskell's authors: currying is a
 *mathematical detail* which shouldn't obscure function type:
 'f: a->b->c' is less readable than 'f: a,b->c'.

 renoX

That is standard in all languages of the ML family, not only Haskell.

I know, but 'standard' doesn't mean that it is a good idea.. renoX

Neither does 'mathematical detail' mean it is obscure, unreadable or a mistake as you seem to imply.
Sep 24 2013
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/20/13 2:36 AM, bearophile wrote:
 In D it's often better to use std.algorithm/range instead of raw
 foreach loops (although, probably unlike in C++, heavy use of ranges in
 D leads to lower performance, even if you use the LDC2 compiler).

Why would there be a performance loss? Andrei
Sep 20 2013
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/20/2013 8:54 AM, Joseph Rushton Wakeling wrote:
 On 20/09/13 17:28, Andrei Alexandrescu wrote:
 Why would there be a performance loss?

Depends on the particular case, but my experience is that _in practice_ stuff based around range interfaces can often be slower than raw iteration. I don't think anyone's saying a performance loss is inevitable or unavoidable, but there currently is one.

I know that, at least with dmd's back end, it's because the optimizer was built around the kinds of things that C++ programmers tend to write. The D range/algorithm style generates unusual code (for the back end) that the back end doesn't optimize for. For example, ranges tend to lump several variables together and call them a 'struct'. The back end is not tuned to deal with structs as an aggregate of discrete 'variables', meaning that such variables don't get assigned to registers. Structs are treated as a lump. This is not a fundamental performance problem with D. The back end needs to be improved to "dis-aggregate" structs back into discrete variables.
Sep 20 2013
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/20/13 9:28 AM, Joseph Rushton Wakeling wrote:
 On 20/09/13 18:01, Walter Bright wrote:
 I know that, at least with dmd's back end, it's because the optimizer
 was built
 around the kinds of things that C++ programmers tend to write. The D
 range/algorithm generates unusual code (for the back end) that the
 back end
 doesn't optimize for.

 For example, ranges tend to lump several variables together and call
 them a
 'struct'. The back end is not tuned to deal with structs as an
 aggregate of
 discrete 'variables', meaning that such variables don't get assigned to
 registers. Structs are treated as a lump.

 This is not a fundamental performance problem with D.

 The back end needs to be improved to "dis-aggregate" structs back into
 discrete
 variables.

I wouldn't single out DMD for criticism -- I don't know to what extent the underlying reasons overlap, but all the compilers cope less well with range-based material than they might in theory. The canonical example would be something like, foreach (i; iota(10)) { ... } which in theory shouldn't be any slower than, foreach (i; 0 .. 10) { ... } but in practice is, no matter what the compiler.

I think I know how to fix that. I hypothesize it's about using actual increment instead of a stored value "step" for the particular case when step is 1. Andrei
Sep 20 2013
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/20/13 11:21 AM, Joseph Rushton Wakeling wrote:
 On 20/09/13 19:41, Andrei Alexandrescu wrote:
 On 9/20/13 9:28 AM, Joseph Rushton Wakeling wrote:
 The canonical example would be something like,

      foreach (i; iota(10)) { ... }

 which in theory shouldn't be any slower than,

      foreach (i; 0 .. 10) { ... }

 but in practice is, no matter what the compiler.

I think I know how to fix that. I hypothesize it's about using actual increment instead of a stored value "step" for the particular case when step is 1.

Excellent, that'll be great to see :-)

http://d.puremagic.com/issues/show_bug.cgi?id=11077 Andrei
Sep 20 2013
prev sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 20/09/13 19:41, Andrei Alexandrescu wrote:
 On 9/20/13 9:28 AM, Joseph Rushton Wakeling wrote:
 The canonical example would be something like,

      foreach (i; iota(10)) { ... }

 which in theory shouldn't be any slower than,

      foreach (i; 0 .. 10) { ... }

 but in practice is, no matter what the compiler.

I think I know how to fix that. I hypothesize it's about using actual increment instead of a stored value "step" for the particular case when step is 1.

Excellent, that'll be great to see :-)
Sep 20 2013
prev sibling next sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 20/09/13 18:01, Walter Bright wrote:
 I know that, at least with dmd's back end, it's because the optimizer was built
 around the kinds of things that C++ programmers tend to write. The D
 range/algorithm generates unusual code (for the back end) that the back end
 doesn't optimize for.

 For example, ranges tend to lump several variables together and call them a
 'struct'. The back end is not tuned to deal with structs as an aggregate of
 discrete 'variables', meaning that such variables don't get assigned to
 registers. Structs are treated as a lump.

 This is not a fundamental performance problem with D.

 The back end needs to be improved to "dis-aggregate" structs back into discrete
 variables.

I wouldn't single out DMD for criticism -- I don't know to what extent the underlying reasons overlap, but all the compilers cope less well with range-based material than they might in theory. The canonical example would be something like, foreach (i; iota(10)) { ... } which in theory shouldn't be any slower than, foreach (i; 0 .. 10) { ... } but in practice is, no matter what the compiler.
Sep 20 2013
prev sibling next sibling parent reply Piotr Szturmaj <bncrbme jadamspam.pl> writes:
On 20.09.2013 17:28, Andrei Alexandrescu wrote:
 On 9/20/13 2:36 AM, bearophile wrote:
 In D it's often better to use std.algorithm/range instead of raw foreach
 loops (despite probably unlike C++ in D a heavy use of ranges leads to a
 lower performance, even if you use the LDC2 compiler).

Why would there be a performance loss?

Because the range concept underutilizes current CPUs. I don't know if all empty/front/popFront calls are inlined. I suppose some templated functions are, but surely not all of them. What will be faster, calling all of those functions for each element, or traversing memory using a loop? Of course the 2nd option will be faster, because the act of traversing (cached) memory does not have the overhead of a function call. Some time ago I proposed a hybrid range concept where front() is an array. All algorithms would first traverse the memory in this array, then call popFront to get another array (slice). In this way, function call overhead is minimized because it's paid sparsely rather than for each range element.
Sep 20 2013
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/20/13 9:51 AM, Piotr Szturmaj wrote:
 On 20.09.2013 17:28, Andrei Alexandrescu wrote:
 On 9/20/13 2:36 AM, bearophile wrote:
 In D it's often better to use std.algorithm/range instead of raw foreach
 loops (despite probably unlike C++ in D a heavy use of ranges leads to a
 lower performance, even if you use the LDC2 compiler).

Why would there be a performance loss?

 Because the range concept underutilizes current CPUs. I don't know if all empty/front/popFront are inlined. I suppose some templated functions are, but surely not all of them. What will be faster, calling all of those functions for each element or traversing memory using a loop?

This is just a supposition. Inlining is automatic and just works subject to the constraints chosen by the implementers. It's not like there's a guy sitting there and getting bored of inlining.
 Of course 2nd option will be faster, because the act of traversing
 (cached) memory does not have the overhead of a function call.

No, no.
 Some time ago I proposed a hybrid range concept where the front() is an
 array. All algorithms would first traverse the memory in this array,
 then call popFront to get another array (slice). In this way, function
 call overhead is minimized because it's paid sparsely rather than for
 each range element.

That's a good idea but for completely different reasons. Andrei
Sep 20 2013
next sibling parent Piotr Szturmaj <bncrbme jadamspam.pl> writes:
On 20.09.2013 19:45, Andrei Alexandrescu wrote:
 On 9/20/13 9:51 AM, Piotr Szturmaj wrote:
 On 20.09.2013 17:28, Andrei Alexandrescu wrote:
 On 9/20/13 2:36 AM, bearophile wrote:
 In D it's often better to use std.algorithm/range instead of raw
 foreach
 loops (despite probably unlike C++ in D a heavy use of ranges leads
 to a
 lower performance, even if you use the LDC2 compiler).

Why would there be a performance loss?

 Because the range concept underutilizes current CPUs. I don't know if all empty/front/popFront are inlined. I suppose some templated functions are, but surely not all of them. What will be faster, calling all of those functions for each element or traversing memory using a loop?

This is just a supposition. Inlining is automatic and just works subject to the constraints chosen by the implementers. It's not like there's a guy sitting there and getting bored of inlining.

This was not my point. I know that inlining is automatic, but certainly not all functions are automatically inlined, and all of those non-inlined ones carry some overhead.
 Of course 2nd option will be faster, because the act of traversing
 (cached) memory does not have the overhead of a function call.

No, no.
 Some time ago I proposed a hybrid range concept where the front() is an
 array. All algorithms would first traverse the memory in this array,
 then call popFront to get another array (slice). In this way, function
 call overhead is minimized because it's paid sparsely rather than for
 each range element.

That's a good idea but for completely different reasons.

Could you elaborate? :)
Sep 20 2013
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/20/2013 10:45 AM, Andrei Alexandrescu wrote:
 It's not like there's a guy sitting there and getting bored of inlining.

The nice thing about modern optimizers is they are indefatigable. They don't get bored, go on strike, or come to work high. :-)
Sep 20 2013
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/20/13 8:54 AM, Joseph Rushton Wakeling wrote:
 On 20/09/13 17:28, Andrei Alexandrescu wrote:
 Why would there be a performance loss?

Depends on the particular case, but my experience is that _in practice_ stuff based around range interfaces can often be slower than raw iteration. I don't think anyone's saying a performance loss is inevitable or unavoidable, but there currently is one.

With dmd that's indeed the case. But LLVM is good at enregistering small structs, which is how ranges are implemented. Andrei
Sep 20 2013
prev sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 20/09/13 17:28, Andrei Alexandrescu wrote:
 Why would there be a performance loss?

Depends on the particular case, but my experience is that _in practice_ stuff based around range interfaces can often be slower than raw iteration. I don't think anyone's saying a performance loss is inevitable or unavoidable, but there currently is one.
Sep 20 2013
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Thu, 19 Sep 2013 16:48:43 -0700
"H. S. Teoh" <hsteoh quickfur.ath.cx> wrote:

 On Fri, Sep 20, 2013 at 12:18:22AM +0200, Szymon Gatner wrote:
 I had similar thoughts when watching GoingNaive 2013:
 http://bartoszmilewski.com/2013/09/19/edward-chands/
 I was more and more scared with every talk and now I am valualizing
 my polymorphic types a'la Sean Parent


Heh, I love the article's title. Although: "[Edward Scissorhands is] a darker version of Pinocchio, shot in suburban settings." I saw Edward Scissorhands as being a re-telling of Frankenstein: A grotesque, but kind, man-made creature is ostracized by the frightened villagers to ultimately tragic results. Although it could be argued that Pinocchio is a lighter re-imagining of Frankenstein anyway.
 
 It's this endless cycle of a remedy upon a remedy upon a patch to a
 remedy that drove me to look for something better. I found D. :)
 

So true.
Sep 19 2013
prev sibling next sibling parent "PauloPinto" <pjmlp progtools.org> writes:
On Friday, 20 September 2013 at 02:24:31 UTC, bearophile wrote:
 Szymon Gatner:

 http://bartoszmilewski.com/2013/09/19/edward-chands/

From the blog post:
Imperative languages offer no protection against data races — 
maybe with the exception of D.<

What about Ada and Rust?

Many people in the C family of languages tend to disregard the Pascal family. Sometimes I dream of a world where Modula-2 had taken C's place. :) -- Paulo
Sep 20 2013
prev sibling next sibling parent "PauloPinto" <pjmlp progtools.org> writes:
On Thursday, 19 September 2013 at 23:50:04 UTC, H. S. Teoh wrote:
 On Fri, Sep 20, 2013 at 12:18:22AM +0200, Szymon Gatner wrote:
 I had similar thoughts when watching GoingNaive 2013:
 http://bartoszmilewski.com/2013/09/19/edward-chands/
 I was more and more scared with every talk and now I am 
 valualizing
 my polymorphic types a'la Sean Parent

Quote: There was so much talk about how not to use C++ that it occurred to me that maybe this wasn’t the problem of incompetent programmers, but that straightforward C++ is plain wrong. So if you just learn the primitives of the language and try to use them, you’re doomed. ... [big snippage] ... I can go on and on like this (and I often do!). Do you see the pattern? Every remedy breeds another remedy. It’s no longer just the C subset that should be avoided. Every new language feature or library addition comes with a new series of gotchas. And you know a new feature is badly designed if Scott Meyers has a talk about it. (His latest was about the pitfalls of, you guessed it, move semantics.) This is sooo true. It reflects my experience with C++. Honestly, it got to a point where I gave up trying to following the remedy upon the patch to another remedy to a third remedy that patches yet another remedy on top of a fundamentally broken core. [... cutted]

I dislike C, and will take C++ safety and abstraction capabilities over C, unless forced to do otherwise. Now, having said this. I hardly write any C++ nowadays. In the types of projects we do, it is all about JVM and .NET languages. Sometimes even replacing "legacy C++" systems by new systems done in those languages. So writing C++, or even C, tends to be restricted to a few method calls. For example, recently we had a project for real time data analysis on Windows. It was a C#/WPF application. C++ was only used for the hardware interfaces and SIMD optimizations for a few algorithms.
 Sounds like D's decision to go with a GC may not be *that* bad 
 after
 all...

I like GC enabled systems programming languages since I used Oberon, and had some contact with Modula-3. Like many things in programming, the only way to convince other developers is to have them use such systems. -- Paulo
Sep 20 2013
prev sibling next sibling parent "QAston" <qaston gmail.com> writes:
On Thursday, 19 September 2013 at 22:46:09 UTC, Andrei 
Alexandrescu wrote:
 On 9/19/13 3:18 PM, Szymon Gatner wrote:
 I had similar thoughts when watching GoingNaive 2013:
 http://bartoszmilewski.com/2013/09/19/edward-chands/

Nice piece.
 I was more and more scared with every talk and now I am 
 valualizing my
 polymorphic types a'la Sean Parent

That I think is sketchy advice. Andrei

What Sean Parent does in his value semantics talk is basically an interface adapter for a struct, wrapped in a struct providing implicit conversions. By using structs by default and adapting them to interfaces as needed, you get the smallest possible overhead on a single function call - either no virtual dispatch or 1 virtual dispatch. When writing adapters to interfaces you get 2 or more. The other benefit is that the approach reduces the dependencies you need to know about when you use the wrapper type - the approach would not work in D due to the lack of argument-dependent lookup, so you need to write the adapter type (for a particular type you want to convert) yourself anyway. And I think it's better that way. In the end the whole thing is just the adapter design pattern applied to C++.
Sep 20 2013
prev sibling next sibling parent "Szymon Gatner" <noemail gmail.com> writes:
On Friday, 20 September 2013 at 07:39:47 UTC, QAston wrote:
 On Thursday, 19 September 2013 at 22:46:09 UTC, Andrei 
 Alexandrescu wrote:
 On 9/19/13 3:18 PM, Szymon Gatner wrote:
 I had similar thoughts when watching GoingNaive 2013:
 http://bartoszmilewski.com/2013/09/19/edward-chands/

Nice piece.
 I was more and more scared with every talk and now I am 
 valualizing my
 polymorphic types a'la Sean Parent

That I think is sketchy advice. Andrei

What Sean Parent does in his value semantics talk is basically an interface adapter for a struct, wrapped in a struct providing implicit conversions. By using structs by default and adapting them to interfaces as needed you get to have the smallest possible overhead on a single function call - either no virtual dispatch or 1 virtual dispatch. When writing adapters to interfaces you get 2 or more.

The struct was only an example; obviously you can use proper class implementations with data hiding etc. You will also get at least 1 virtual call because a "concept" is always an abstract class. That being said, I've been using this technique in other places in my code thus far too, for example: my C++ ranges (yeah, idea stolen from D) are templates so they can be composed for free, but there are abstractions like InputRange<> which "erase" (this word is stupid btw, no type is really erased) the underlying type and provide a convenient abstraction in places. I also use this technique for my std containers. Vector is a std::vector with a polymorphic allocator; again it holds Allocator<> by value but the underlying implementation is (often a composition of) allocator templates. In the allocator's case I do explicitly want the container alloc to be a polymorphic type so that my interfaces never depend on the allocator type. The difference now for me is that I did sometimes turn value types into polymorphic types using this technique (like with allocators), but it never occurred to me that the other way around can be beneficial too.
 The other benefit is that approach is reducing dependencies you 
 need to know about when you use the wrapper type - the approach 
 would not work in D due to lack of argument dependent lookup so 
 you need to write the adapter type (for a particular type you 
 want to convert) yourself anyways. And I think it's better that 
 way.

 In the end the whole thing is just adapter design pattern 
 applied to C++.

Well, Adapter is supposed to *change* an interface to adapt it to client needs, so it is a bit of a stretch to call it that here, but yeah, this technique (called the "external polymorphism" idiom) can be used to adapt. This really is just a (smart) pointer with the full interface of the underlying object (which also adds the benefit of using "." instead of "->" on the target).
Sep 20 2013
prev sibling next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
More from the blog post:

It’s a common but false belief that reference counting (using 
shared pointers in particular) is better than garbage 
collection. There is actual research showing that the two 
approaches are just two sides of the same coin. You should 
realize that deleting a shared pointer may lead to an arbitrary 
long pause in program execution, with similar performance 
characteristics as a garbage sweep.<

That's an interesting paper, and the symmetry between the two forms of garbage collection is something to remember, but it doesn't contain benchmarks; you can't draw performance conclusions from that paper. Bye, bearophile
Sep 20 2013
prev sibling next sibling parent "QAston" <qaston gmail.com> writes:
On Friday, 20 September 2013 at 08:20:48 UTC, Szymon Gatner wrote:
 The struct was only an example, obviously you can use proper 
 class implementations with data hiding etc. You will also get 
 at least 1 virtual call because a "concept" is always an 
 abstract class.

Direct use of the value type has no indirections.
 That being said, I've been using this technique in other places 
 in my code this far too, for example: my C++ ranges (yeah idea 
 stolen from D) are templates so can be composed for free but 
 there are abstractions like InputRange<> which "erase" (this 
 word is stupid btw, no type is erased really) underlying type 
 and provide convenient abstraction in places. I also use this 
 technique for my std containers. Vector is a std::vector with 
 polymorphic allocator, again it holds Allocator<> by value but 
 underlying implementation is (often composition) of allocator 
 templates. In allocator's case I do explicitly want container 
 alloc to be polymorphic type so that my interfaces never depend 
 on allocator type.

 Difference now for me is that I did sometimes turn value types 
 into polymorphic types using this technique (like with 
 allocators) but it never occurred to me that other wan can be 
 beneficial too.

 Well Adapter is suppose to *change* interface to adapt to 
 client needs so this is a bit of a strech to call it that here, 
 but yeah, this technique called ("external polymorphism" idiom) 
 can be used to adapt . This really just is a (smart)pointer 
 with a full interface of underlying object (which also adds 
 benefit or using "." instead of "->" on target).

Well, I won't argue about naming; for me, a type that wraps another type to provide a desired interface is an adapter :P. I agree that this idiom is useful and I use it as needed too.
Sep 20 2013
prev sibling next sibling parent reply "Peter Alexander" <peter.alexander.au gmail.com> writes:
On Friday, 20 September 2013 at 02:24:31 UTC, bearophile wrote:
 At Going Native 2013 there was a very good talk that among 
 other things suggests to avoid raw loops in C++ code. But while 
 this is good advice (so much that raw loops are becoming a bit 
 of code smell for me), there are several situations where 
 imperative loops keep being better for me. Explicit recursion 
 is not always more readable and more easy to understand than 
 imperative foreach loops.

I don't think his advice is to use recursion instead of loops. I believe the point was that raw loops can usually be replaced by something more generic (e.g. std::find, std::rotate, std::transform, etc.). The loop is just an implementation detail of a more high-level algorithm. Within those algorithms, it's ok to use loops.
Sep 20 2013
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Sep 20, 2013 at 05:42:03PM +0200, Joseph Rushton Wakeling wrote:
 On 20/09/13 17:22, H. S. Teoh wrote:
Which makes it even more confusing, since newbies would probably
equate arrays (the container) with ranges from their frequent use in
range examples.

Yes, it's completely non-obvious. I think the first time I realized the range/container distinction was when I tried experimentally replacing some built-in arrays with std.container.Array and discovered that I couldn't use them in a foreach loop.

Sad to say, I encountered a good number of Phobos bugs caused by the conflation between built-in arrays and ranges. Code would inadvertently assume array behaviour on ranges, and break when you pass in a non-array range. Some of these have been fixed; I'm pretty sure there are still more lurking around.
Perhaps it's more useful to think of T[] not as an array per se, but
as a *slice* of the underlying array data which is managed by
druntime. I think I'm OK with saying that arrays (i.e. the underlying
data) are containers, while slices (what we think of today as
"arrays") are ranges.

It's not a bad description, but I'm not sure that takes into account the const(T[]) case.

Actually, it helps you understand the const(T[]) case. To iterate over a const array, you need a range over it (i.e., a slice); and indeed, writing arr[] on an array of type const(T[]) gives you a tail-const slice of type const(T)[], which *is* an iterable range. The confusion really just stems from the conflation between T[] and a range over T[] in the non-const case. T -- People tell me I'm stubborn, but I refuse to accept it!
Sep 20 2013
prev sibling next sibling parent "PauloPinto" <pjmlp progtools.org> writes:
On Friday, 20 September 2013 at 08:59:44 UTC, Peter Alexander 
wrote:
 On Friday, 20 September 2013 at 02:24:31 UTC, bearophile wrote:
 At Going Native 2013 there was a very good talk that among 
 other things suggests to avoid raw loops in C++ code. But 
 while this is good advice (so much that raw loops are becoming 
 a bit of code smell for me), there are several situations 
 where imperative loops keep being better for me. Explicit 
 recursion is not always more readable and more easy to 
 understand than imperative foreach loops.

I don't think his advice is to use recursion instead of loops. I believe the point was that raw loops can usually be replaced by something more generic (e.g. std::find, std::rotate, std::transform, etc.). The loop is just an implementation detail of a more high-level algorithm. Within those algorithms, it's ok to use loops.

While at the university, I got to read a paper about an HPC computer architecture with NUMA that used Lisp as its systems programming language. The functional programming approach was the only way to really take proper advantage of the hardware-specific architecture. This is why it is better to use algorithms instead of explicit loops, as they can be made to scale. -- Paulo
Sep 20 2013
prev sibling next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Peter Alexander:

 On Friday, 20 September 2013 at 02:24:31 UTC, bearophile wrote:
 At Going Native 2013 there was a very good talk that among 
 other things suggests to avoid raw loops in C++ code. But 
 while this is good advice (so much that raw loops are becoming 
 a bit of code smell for me), there are several situations 
 where imperative loops keep being better for me. Explicit 
 recursion is not always more readable and more easy to 
 understand than imperative foreach loops.

I don't think his advice is to use recursion instead of loops. I believe the point was that raw loops can usually be replaced by something more generic (e.g. std::find, std::rotate, std::transform, etc.). The loop is just an implementation detail of a more high-level algorithm. Within those algorithms, it's ok to use loops.

In my comment I have packed too much into too little space. Let me try again: In Haskell you usually don't use explicit recursion. You usually use higher order functions. But once in a while, for performance, to implement those HOFs, or for other reasons, you use explicit recursion. In that Going Native 2013 talk it was argued that in C++ it's usually better to use STL algorithms instead of raw loops. But sometimes you have to use raw loops, when you implement those algorithms, and in other situations. In D it's often better to use std.algorithm/range instead of raw foreach loops (although in D, probably unlike C++, a heavy use of ranges currently leads to lower performance, even if you use the LDC2 compiler). So in Haskell, C++ and D it's better to use higher iteration abstractions instead of raw recursion/iteration, but once in a while you have to go lower-level. From what I've seen so far, in those cases I prefer to use explicit foreach iteration in D instead of explicit recursion in Haskell. Bye, bearophile
Sep 20 2013
prev sibling next sibling parent "Szymon Gatner" <noemail gmail.com> writes:
On Friday, 20 September 2013 at 08:59:44 UTC, Peter Alexander 
wrote:
 On Friday, 20 September 2013 at 02:24:31 UTC, bearophile wrote:
 At Going Native 2013 there was a very good talk that among 
 other things suggests to avoid raw loops in C++ code. But 
 while this is good advice (so much that raw loops are becoming 
 a bit of code smell for me), there are several situations 
 where imperative loops keep being better for me. Explicit 
 recursion is not always more readable and more easy to 
 understand than imperative foreach loops.

I don't think his advice is to use recursion instead of loops. I believe the point was that raw loops can usually be replaced by something more generic (e.g. std::find, std::rotate, std::transform, etc.). The loop is just an implementation detail of a more high-level algorithm. Within those algorithms, it's ok to use loops.

If only std algorithms took containers (by that I mean anything the range-based for accepts too) as arguments (and not iterators)... even in the form of new functions like foreach_all, transform_all etc.
Sep 20 2013
prev sibling next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
 From what I've seen so far, in those cases I prefer to use 
 explicit foreach iteration in D instead of explicit recursion 
 in Haskell.

Two other things to notice: Haskell code seems to use too many different tiny functions and higher-order functions, so it's hard for me to remember them all; the Haskell code starts to look like APL code. So using HOFs is useful, but beyond a certain number of them I prefer "raw coding". The other thing is that Haskell user-defined operators sometimes hurt my ability to read code. This compounds the preceding problem. Bye, bearophile
Sep 20 2013
prev sibling next sibling parent "Szymon Gatner" <noemail gmail.com> writes:
On Friday, 20 September 2013 at 08:41:34 UTC, QAston wrote:

 Well, I won't argue about naming, for me when a type is a 
 wrapper which provides desired interface to that type is an 
 adapter :P.

I can live with that ;)
Sep 20 2013
prev sibling next sibling parent "Szymon Gatner" <noemail gmail.com> writes:
On Friday, 20 September 2013 at 10:00:32 UTC, Jacob Carlborg 
wrote:
 On 2013-09-20 11:37, Szymon Gatner wrote:

 If only std algorithms took containers (by that I mean things 
 that
 container for accepts too) as arguments (and not iterators)... 
 even in
 the form of new functions like foreach_all, transform_all etc.

Can't a container be a range as well?

Not sure what you mean, but C++ has no concept of a range. Apparently there is now a study group, but atm standard algos are crippled compared to the range-based "container for".
Sep 20 2013
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 9/20/13, Szymon Gatner <noemail gmail.com> wrote:
 I had similar thoughts when watching GoingNaive 2013.

Yeah. For example, Andrei's talk was very entertaining, but it was filled with "well I'm using this trick here, but I'll explain this later" bits, which just made following the whole thing really difficult, and it's all because C++ has become a monster.
Sep 20 2013
prev sibling next sibling parent "Szymon Gatner" <noemail gmail.com> writes:
On Friday, 20 September 2013 at 10:02:58 UTC, Szymon Gatner wrote:
 On Friday, 20 September 2013 at 10:00:32 UTC, Jacob Carlborg 
 wrote:
 On 2013-09-20 11:37, Szymon Gatner wrote:

 If only std algorithms took containers (by that I mean things 
 that
 container for accepts too) as arguments (and not 
 iterators)... even in
 the form of new functions like foreach_all, transform_all etc.

Can't a container be a range as well?

Not sure what you mean but C++ has no concept of a range. Apparent there is now a study group but atm standard algos are crippled compared to "container for".

To clarify what I mean by crippled: standard algorithms were supposed to make code more expressive, and one of the ways they would do it was to eliminate tedious iterator-based for loops. Instead of

  for (typename Container<T>::const_iterator it = c.begin(); it != c.end(); ++it) { ... }

you could do:

  std::for_each(c.begin(), c.end(), [](T& v) { ... });

(of course this all only makes any sense with lambdas; without them it is really hard to argue about expressiveness in the first place) but now we can do this:

  for (auto& v : c) { ... }

which is even nicer than the for_each version. Algorithms do more than iterating, of course, but it is a shame they didn't evolve as basic constructs.
Sep 20 2013
prev sibling next sibling parent "Szymon Gatner" <noemail gmail.com> writes:
On Friday, 20 September 2013 at 10:47:52 UTC, Dmitry Olshansky 
wrote:
 20-Sep-2013 14:00, Jacob Carlborg пишет:
 On 2013-09-20 11:37, Szymon Gatner wrote:

 If only std algorithms took containers (by that I mean things 
 that
 container for accepts too) as arguments (and not 
 iterators)... even in
 the form of new functions like foreach_all, transform_all etc.

Can't a container be a range as well?

For Christ sake no, no and no. For starters range skips/drops elements when iterating, and thusly iteration has its own state that is useless for a container otherwise.

That would be a filtering range, which adds additional logic that costs exactly the same as an if() statement inside a for loop when filtering on a condition manually.
 TL;DR: Suboptimal, unnatural and error prone are keywords.

Why would it be suboptimal?
Sep 20 2013
prev sibling next sibling parent "Szymon Gatner" <noemail gmail.com> writes:
On Friday, 20 September 2013 at 10:47:52 UTC, Dmitry Olshansky 
wrote:
 20-Sep-2013 14:00, Jacob Carlborg пишет:
 On 2013-09-20 11:37, Szymon Gatner wrote:

 If only std algorithms took containers (by that I mean things 
 that
 container for accepts too) as arguments (and not 
 iterators)... even in
 the form of new functions like foreach_all, transform_all etc.

Can't a container be a range as well?

For Christ sake no, no and no. For starters range skips/drops elements when iterating, and thusly iteration has its own state that is useless for a container otherwise.

Iteration is a stateful process; ranges are related to the process of iteration, not to containers. As you say, state is useless for containers but necessary for iteration and its context.
Sep 20 2013
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Fri, 20 Sep 2013 14:47:38 +0400
Dmitry Olshansky <dmitry.olsh gmail.com> wrote:
 For Christ sake no, no and no. For starters range skips/drops
 elements when iterating, and thusly iteration has its own state that
 is useless for a container otherwise.
 
 The idea of a "contange" spreads like a virus, no matter how
 abominable the end result is. The fact that slices sometimes look
 like containers (BTW ill-suited for anything beyond primitives/PODish
 stuff) must be the cornerstone of this belief. Strengthened by the
 motto of trying to make user-defined types mimic the behavior of
 built-ins, it leads to a school of thought that it's fine to blend
 ownership and access.

I can vouch that the ease of conflating array/slice/range, while normally a wonderful convenience, has indeed led me into confusion when trying to make a custom type range-compatible. I felt naturally inclined to add the range primitives to the type itself, but when that led to issues like you described (especially the fact that a range *consumes* its elements while iterating, plus the inability to provide alternate views of the same data - which is a very powerful tool IMO), I finally started to grok that a container needs to *provide* a range, and not actually *be* one. Well, except maybe for output ranges. (I think?)
 
 TL;DR: Suboptimal, unnatural and error prone are keywords.
 

They are? Cool!

    auto foo(T)(real a, unnatural b, lazy Suboptimal!T opts) {...}

Looks fun! :)
Sep 20 2013
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Sep 20, 2013 at 09:14:28AM +0200, PauloPinto wrote:
 On Thursday, 19 September 2013 at 23:50:04 UTC, H. S. Teoh wrote:

 I dislike C, and will take C++ safety and abstraction capabilities
 over C, unless forced to do otherwise.

C++ does have some parts that improve over C, it's true. But it's also a minefield filled with pitfalls. Due to its insistence on C compatibility, it basically can only paper over C's flaws, but push hard enough and you're floundering in C problems once more. Well, actually, don't even push -- write straightforward code, and it's almost always wrong. And of the exponential number of ways to write *non*-straightforward code, only one is correct (if that even exists in C++ -- maybe I should rather say, only one is least problematic, 'cos they all are). It's unfortunate that due to C++ basically giving you a gun that can shoot you in the foot backwards while you're aiming at something else (all conveniently abstracted away behind wrappers so you won't even notice the bleeding), it's just sooo easy to abuse.

As I said many times before, at my job they migrated from C++ back to C, because, for all of its flaws, C at least has a well-understood core and well-known ways of managing its risky parts. The C++ codebase we used to have was completely unmaintainable because it just combined so many C++ features in the worst possible ways -- something inevitable when it has passed through so many hands. Both C and C++ require a lot of coding by convention and extreme attention to detail, but at least in C, mistakes tend to be noticed rather quickly, whereas in C++ you could be coding for months, years, before you even notice anything wrong. And by then, it's too late to fix it because half the codebase is already written in the "wrong" way. (And there are just too many wrong ways to write C++ code.)

It's such a refreshing change whenever I get to work with D in my free time. D does have its own warts and problems, it's true, but it's a world of difference from C/C++. It's like the difference between being pricked by a needle every now and then vs. drinking glass shards.

T

--
Be in denial for long enough, and one day you'll deny yourself of things you wish you hadn't.
Sep 20 2013
prev sibling next sibling parent "Wyatt" <wyatt.epp gmail.com> writes:
On Friday, 20 September 2013 at 13:00:49 UTC, H. S. Teoh wrote:
 [...]
 It's unfortunate that due to C++ basically giving you a gun 
 that can
 shoot you in the foot backwards while you're aiming at 
 something else
 (all convniently abstracted away behind wrappers so you won't 
 even
 notice the bleeding)

This seems apt: "C lets you shoot yourself in the foot. C++ lets you std::anatomy<limb>(std::dim::Directions.DOWN, std::anatomy::digits<5>).shoot(this)" -- eevee -Wyatt
Sep 20 2013
prev sibling next sibling parent "PauloPinto" <pjmlp progtools.org> writes:
On Friday, 20 September 2013 at 13:00:49 UTC, H. S. Teoh wrote:
 On Fri, Sep 20, 2013 at 09:14:28AM +0200, PauloPinto wrote:
 On Thursday, 19 September 2013 at 23:50:04 UTC, H. S. Teoh 
 wrote:

 I dislike C, and will take C++ safety and abstraction 
 capabilities
 over C, unless forced to do otherwise.

C++ does have some parts that improve over C, it's true. But it's also a minefield filled with pitfalls. [...] It's like the difference between being pricked by a needle every now and then vs. drinking glass shards. T

I understand your situation. In my case, when I met C, I already knew Turbo Pascal, so C seemed a bit stone-age due to its string handling, no modules, unsafe by default, no reference types, no OO. C++ gave me some of the Turbo Pascal comfort back, together with C compatibility and portability. Although writing portable C and C++ code in the '90s meant lots of #ifdefs. Anyway, as I mentioned in my previous post, both languages hardly play any role in the type of work my employer targets. I guess it is always a matter of how we got to learn our tools, and personal experience. -- Paulo
Sep 20 2013
prev sibling next sibling parent "Szymon Gatner" <noemail gmail.com> writes:
On Friday, 20 September 2013 at 13:32:47 UTC, Dmitry Olshansky 
wrote:
 20-Sep-2013 15:01, Szymon Gatner пишет:
 On Friday, 20 September 2013 at 10:47:52 UTC, Dmitry Olshansky 
 wrote:
 20-Sep-2013 14:00, Jacob Carlborg пишет:
 On 2013-09-20 11:37, Szymon Gatner wrote:

 If only std algorithms took containers (by that I mean 
 things that
 container for accepts too) as arguments (and not 
 iterators)... even in
 the form of new functions like foreach_all, transform_all 
 etc.

Can't a container be a range as well?

For Christ sake no, no and no. For starters range skips/drops elements when iterating, and thusly iteration has its own state that is useless for a container otherwise.

Iteration is a stateful process, ranges are related to the process of iteration not to containers. As you say state is useless for containers but is necessary to iteration and its context.

A text-book example of self-destruction(?). Ranges (in particular Input/Forward) are not much above encapsulation of iteration, hence must contain the state required to iterate said elements. Which leads to the point that indeed containers have no business being ranges by themselves.

The bottom line is:

    sort(container[]);
vs
    sort(container);

Where I hardly see how coupling containers with algorithms can bring even the slightest benefit.

OK, so it seems we agree. I never said that containers should be ranges. Ranges are an abstraction over iterators, and that is it. A single container can have multiple ranges existing at the same time. State is attached to ranges, not containers. And most importantly, ranges are always mutable, even if the underlying container isn't. Ranges are meant to have state. No idea what you mean by self-destruction.
Sep 20 2013
prev sibling next sibling parent "Szymon Gatner" <noemail gmail.com> writes:
On Friday, 20 September 2013 at 14:04:06 UTC, Dmitry Olshansky 
wrote:
 20-Sep-2013 17:50, Szymon Gatner пишет:
 On Friday, 20 September 2013 at 13:32:47 UTC, Dmitry Olshansky 
 wrote:
 20-Sep-2013 15:01, Szymon Gatner пишет:
 On Friday, 20 September 2013 at 10:47:52 UTC, Dmitry 
 Olshansky wrote:



 A text-book example of self-destruction(?).
 Ranges (in particular Input/Forward) are not much above 
 encapsulation
 of iteration, hence must contain that state required to 
 iterate said
 elements. Which leads to the point that indeed containers 
 have no
 business being ranges by themselves.

 The bottom line is:
 sort(container[]);
 vs
 sort(container);

 Where I hardly see how coupling containers with algorithms 
 can bring
 even slightest benefit.

OK so it seems we agree. I never said that containers should be ranges. Ranges are abstraction on iterators and that is it. Single container can have multiple ranges existing at the same time. State is attached to ranges not containers. And most importantly ranges are mutable always, even if underlying container isn't. Ranges are meant to have state. No idea what you mean by self destruction.

Then it may be a misunderstanding on my part. I was referring to your previous reply, where I basically said:
 Can't a container be a range as well?


For Christ sake no, no and no.

[... Because that would be ...]
 TL;DR: Suboptimal, unnatural and error prone are keywords.

Then your question - why would it be suboptimal? - which your second reply seems to clearly explain: extra state placed where it doesn't belong. I can't easily reconcile your two answers, as the second one seems to answer the questions raised by the first. Anyhow, we are in agreement here.

Mind that "Can't a container be a range as well?" was not from me. Still, moving computation over a range from the for-loop body into the range implementation (like filtering) incurs no performance penalty and has the additional benefits of composability and maintainability. Do you mean that is suboptimal?
Sep 20 2013
prev sibling next sibling parent "renoX" <renozyx gmail.com> writes:
On Friday, 20 September 2013 at 09:43:10 UTC, bearophile wrote:
[cut]
Another thing to notice is that Haskell user defined operators
 sometimes hurt my ability to read code. This is composed with 
 the precedent problem.

++ I remember that a tutorial about Elm felt much easier to read than other texts based on Haskell, because Elm's author took great care in choosing readable operators. That said, he made the same mistake as Haskell's authors: currying is a *mathematical detail* which shouldn't obscure the function type: 'f: a->b->c' is less readable than 'f: a,b->c'. renoX
Sep 20 2013
prev sibling next sibling parent "eles" <eles eles.com> writes:
On Friday, 20 September 2013 at 13:00:49 UTC, H. S. Teoh wrote:
 On Fri, Sep 20, 2013 at 09:14:28AM +0200, PauloPinto wrote:
 On Thursday, 19 September 2013 at 23:50:04 UTC, H. S. Teoh 
 wrote:

tend to be noticed rather quickly, whereas in C++ you could be coding for months, years, before you even notice anything wrong. And by then, it's too late to fix it because half the codebase is already written in the "wrong" way.

I think that, even more than the hidden code execution (constructors, operators) and unpredictable code paths (exceptions), this is the strongest argument among those Linus made in his rant against C++.
Sep 20 2013
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Sep 20, 2013 at 05:36:59PM +0400, Dmitry Olshansky wrote:
 20-Sep-2013 14:55, Szymon Gatner пишет:
On Friday, 20 September 2013 at 10:47:52 UTC, Dmitry Olshansky wrote:
20-Sep-2013 14:00, Jacob Carlborg пишет:
On 2013-09-20 11:37, Szymon Gatner wrote:

If only std algorithms took containers (by that I mean things that
container for accepts too) as arguments (and not iterators)...
even in the form of new functions like foreach_all, transform_all
etc.

Can't a container be a range as well?




A container should not be confused with a range. That way leads to dragons. :-P (It's rather unfortunate that built-in arrays conflate the two; it leads to a lot of wrong code that works only with arrays but not with "real" ranges.)
For Christ sake no, no and no. For starters range skips/drops
elements when iterating, and thusly iteration has its own state that
is useless for a container otherwise.

That would be a filtering range which adds additional logic that costs exactly the same as an if() statement inside a for loop when filtering on condition manually.

Better examples are traversing e.g. binary-tree depth-first in-order, or post-order and that would require state. Again it's easy to miss by looking at built-in arrays.

Traversing the tree at all requires state, whether or not you do it with ranges. You either put the state on the runtime stack (recursion), or you put the state on the heap in the range adaptor. Same thing.
TL;DR: Suboptimal, unnatural and error prone are keywords.

Why would it be suboptimal?

If said tree needs to be a range, I would be horrified to see how it manages to be at the same time a range that (for the sake of example) traverses said tree depth-first in-order. Not even talking about trees having no one "natural" range over them.

Some trees do, like document trees. But yeah, in general, you don't have a natural ordering to trees.

But then again, the whole point of ranges is *linear* traversal. If you're not doing that, then it's probably the wrong abstraction to use. (FWIW, while SortedRange is a neat hack, I haven't actually found it that useful in practice. Most of the ranges I actually use are for the cases where it's an input/forward range and you don't want to store the whole thing, so random access is impractical. The cases where I actually have random access usually turn out to be plain ole arrays anyway, so the SortedRange abstraction isn't really necessary.)

So for trees, I'd argue that you need a tree-specific iteration abstraction, not ranges, which are linear. It's a clear case of structure conflict. ;-)

T

--
Real Programmers use "cat > a.out".
Sep 20 2013
prev sibling next sibling parent "Craig Dillabaugh" <craig.dillabaugh gmail.com> writes:
On Friday, 20 September 2013 at 14:49:21 UTC, H. S. Teoh wrote:

snip
 [...]

 Some trees do, like document trees. But yeah, in general, you 
 don't have
 a natural ordering to trees.

 But then again, the whole point of ranges is *linear* 
 traversal. If
 you're not doing that, then it's probably the wrong abstraction 
 to use.
 (FWIW, while SortedRange is a neat hack, I haven't actually 
 found it
 that useful in practice. Most of the ranges I actually use are 
 for the
 cases where it's an input/forward range and you don't want to 
 store the
 whole thing, so random access is impractical. The cases where I 
 actually
 have random access usually turn out to be plain ole arrays 
 anyway, so
 the SortedRange abstraction isn't really necessary.)

 So for trees, I'd argue that you need a tree-specific iteration
 abstraction, not ranges, which are linear. It's a clear case of
 structure conflict. ;-)


 T

Excuse my lack of knowledge on ranges, so maybe what I am proposing is infeasible, but for a binary tree, for example, couldn't you have an InOrderRange, PreOrderRange, and PostOrderRange, or alternately a DepthFirstRange and BreadthFirstRange? From a consumer's view those would be linear operations?

Craig
Sep 20 2013
prev sibling next sibling parent "Craig Dillabaugh" <craig.dillabaugh gmail.com> writes:
On Friday, 20 September 2013 at 14:55:41 UTC, Craig Dillabaugh 
wrote:
clip
 Excuse my lack of knowledge on Ranges, so maybe what I am 
 proposing is infeasible, for a binary tree for example couldn't 
 you have a InOrderRange, PreOrderRange, and PostOrderRange or 
 alternately DepthFirstRange, and BreadFirstRange.  From a 
 consumers view those would be linear operations?

 Craig

Please also excuse my lack of proper punctuation :o)
Sep 20 2013
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Sep 20, 2013 at 04:55:40PM +0200, Craig Dillabaugh wrote:
 On Friday, 20 September 2013 at 14:49:21 UTC, H. S. Teoh wrote:
 
 snip
[...]

Some trees do, like document trees. But yeah, in general, you don't
have a natural ordering to trees.

But then again, the whole point of ranges is *linear* traversal.  If
you're not doing that, then it's probably the wrong abstraction to
use.  (FWIW, while SortedRange is a neat hack, I haven't actually
found it that useful in practice. Most of the ranges I actually use
are for the cases where it's an input/forward range and you don't
want to store the whole thing, so random access is impractical. The
cases where I actually have random access usually turn out to be
plain ole arrays anyway, so the SortedRange abstraction isn't really
necessary.)

So for trees, I'd argue that you need a tree-specific iteration
abstraction, not ranges, which are linear. It's a clear case of
structure conflict. ;-)


T

Excuse my lack of knowledge on Ranges, so maybe what I am proposing is infeasible, for a binary tree for example couldn't you have a InOrderRange, PreOrderRange, and PostOrderRange or alternately DepthFirstRange, and BreadFirstRange. From a consumers view those would be linear operations?

Well, of course you can have these ranges. But they have to be separate from the container. And they won't be able to take advantage of the tree structure, for example, to prune the children of the current node from the iteration. To do that with a range you'd have to skip over all the descendant nodes manually, which is tedious and also a waste of time.

Incidentally, I did recently write a PreorderRange API for my own code, which provides an additional method called pruneFront() that efficiently skips over descendant nodes. In retrospect, though, I'm questioning the value of doing this, since it makes assumptions about the range that require too much knowledge about the underlying container, which makes the abstraction nothing more than a mere formality. I might as well come out and say "this is a tree" and work with a full-fledged tree abstraction rather than try to shoehorn it into a linear range API.

T

--
Why is it that all of the instruments seeking intelligent life in the universe are pointed away from Earth? -- Michael Beibl
Sep 20 2013
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Sep 20, 2013 at 05:13:45PM +0200, Joseph Rushton Wakeling wrote:
 On 20/09/13 16:48, H. S. Teoh wrote:
A container should not be confused with a range. That way leads to
dragons. :-P  (It's rather unfortunate that built-in arrays conflate
the two, it leads to a lot of wrong code that works only with arrays
but not with "real" ranges.)

Built-in arrays are not _always_ ranges. Consider const(int[]) ... as I found out recently, it's _not_ a range, because you can't popFront on a const entity.

Which makes it even more confusing, since newbies would probably equate arrays (the container) with ranges from their frequent use in range examples. Perhaps it's more useful to think of T[] not as an array per se, but as a *slice* of the underlying array data which is managed by druntime. I think I'm OK with saying that arrays (i.e. the underlying data) are containers, while slices (what we think of today as "arrays") are ranges. Of course, the distinction isn't that clear cut, because appending to a "slice" creates a new array and returns a slice of that, so there's still some murky areas here. T -- Life would be easier if I had the source code. -- YHL
Sep 20 2013
prev sibling next sibling parent "Szymon Gatner" <noemail gmail.com> writes:
On Friday, 20 September 2013 at 15:09:57 UTC, Dmitry Olshansky 
wrote:
 20-Sep-2013 18:13, Szymon Gatner пишет:
 On Friday, 20 September 2013 at 14:04:06 UTC, Dmitry Olshansky 
 wrote:

 Can't a container be a range as well?


For Christ sake no, no and no.

[... Because that would be ...]
 TL;DR: Suboptimal, unnatural and error prone are keywords.

Then your question - Why would it be suboptimal? Which your second reply seem to clearly explain: extra state placed where it doesn't belong. I can't easily correlate your two answers as they look as if the second one answers questions of the first. Anyhow we are in agreement here.

Mind that "Can't a container be a range as well?" was not from me.

Yup, but it was what I was answering when you asked why it would be suboptimal... A container that is a range at the same time is suboptimal, that's all.
 Still, moving computation over a range from for loop body into 
 the range
 implementation (like filtering) incurs no performance penalty 
 and has
 additional benefits of composability and maintainability. Do 
 you mean
 that is suboptimal?

I made no statement on this, I think. It shouldn't be inefficient. The only problem I foresee is that, in order to take advantage of SIMD in the CPU, it could be harder for the compiler to auto-magically vectorize such code. For that matter, I think explicitly providing e.g. a range of float4 feels like a better way to go.

Yup, we agree then :) I think I also add to the confusion because I am thinking about C++ and not D. I didn't realize that arrays/slices are ranges in D. That is not good indeed.
Sep 20 2013
prev sibling next sibling parent "Szymon Gatner" <noemail gmail.com> writes:
On Friday, 20 September 2013 at 16:38:29 UTC, Andrei Alexandrescu 
wrote:
 On 9/20/13 8:13 AM, Joseph Rushton Wakeling wrote:
 On 20/09/13 16:48, H. S. Teoh wrote:
 A container should not be confused with a range. That way 
 leads to
 dragons. :-P  (It's rather unfortunate that built-in arrays 
 conflate the
 two, it leads to a lot of wrong code that works only with 
 arrays but not
 with "real" ranges.)

Built-in arrays are not _always_ ranges. Consider const(int[]) ... as I found out recently, it's _not_ a range, because you can't popFront on a const entity.

Well you can't call popFront on any const range.

What is even the purpose of const ranges? It makes little sense imho. Why would a GoF iterator be immutable? It is like trying to iterate over a collection by indexing, but only having a const int as the index variable. If anything, it only makes sense to have ranges to const (an abstraction over const_iterator).
Sep 20 2013
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Friday, September 20, 2013 18:46:34 Szymon Gatner wrote:
 On Friday, 20 September 2013 at 16:38:29 UTC, Andrei Alexandrescu
 
 wrote:
 On 9/20/13 8:13 AM, Joseph Rushton Wakeling wrote:
 On 20/09/13 16:48, H. S. Teoh wrote:
 A container should not be confused with a range. That way
 leads to
 dragons. :-P  (It's rather unfortunate that built-in arrays
 conflate the
 two, it leads to a lot of wrong code that works only with
 arrays but not
 with "real" ranges.)

Built-in arrays are not _always_ ranges. Consider const(int[]) ... as I found out recently, it's _not_ a range, because you can't popFront on a const entity.

Well you can't call popFront on any const range.

What is even the purpose of const ranges? It makes little sense imho. Why would GoF iterator be immutable? It is like trying to iterate over collection bu indexing but only having const int as index variable. If anything it only make sense to have ranges to const (abstraction over const_iterator).

If an object is const, then all of its members are const, which means that any ranges you get from its members will be const, making such ranges useless. But it still makes sense to iterate over such a range (in most cases, you care about the elements in the range, not the range itself). But since you can't iterate over the range (because it's const), you need to get a tail-const slice of it to iterate over instead. If you can do that, then const ranges aren't a huge problem.

The problem is that making it so that you can get a tail-const slice of a const user-defined range is incredibly difficult. The compiler understands arrays, so it works there, but it doesn't work anywhere else. So, at this point, const and ranges don't play nicely at all.

- Jonathan M Davis
Sep 20 2013
prev sibling next sibling parent "Szymon Gatner" <noemail gmail.com> writes:
On Friday, 20 September 2013 at 16:57:43 UTC, Jonathan M Davis 
wrote:

 If an object is const, then all of its members are const, which 
 means that any
 ranges you get from its members will be const, making such 
 ranges useless.

That is so weird to hear, considering I added ranges to my C++ code and my Vector<T>::all() const can easily return a non-const range even though the container is itself const. This kinda makes it look like D is more limited in that area than C++... Or I am really not getting something.
Sep 20 2013
prev sibling next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Piotr Szturmaj:

 Some time ago I proposed a hybrid range concept where the 
 front() is an array. All algorithms would first traverse the 
 memory in this array, then call popFront to get another array 
 (slice). In this way, function call overhead is minimized 
 because it's paid sparsely rather than for each range element.

There are papers on this topic, like: http://lafstern.org/matt/segmented.pdf Bye, bearophile
Sep 20 2013
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Friday, September 20, 2013 10:47:15 Andrei Alexandrescu wrote:
 On 9/20/13 10:02 AM, Szymon Gatner wrote:
 On Friday, 20 September 2013 at 16:57:43 UTC, Jonathan M Davis wrote:
 If an object is const, then all of its members are const, which means
 that any
 ranges you get from its members will be const, making such ranges
 useless.

That is so weird to hear considering I added ranges to my C++ code and my Vector<T>::all() const can easily return non-const range even tho container is itself const. This kinda looks like D is more limited in that area than C++... Or I really am not getting something.

Yah, it should be possible for a const container to offer ranges over its (non-modifiable) elements.

That's probably easy to do if the container is well-written, because the container creates the range. The problem is converting a const range to a tail-const one, which is what you have to do if you ever end up with a const range for any reason - and if you're using const much, ending up with one is trivial (especially if you ever have a range as a member variable). Unfortunately, in practice, that conversion is quite difficult to do with user-defined types even though it works just fine with arrays.

- Jonathan M Davis
Sep 20 2013
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Friday, September 20, 2013 19:02:22 Szymon Gatner wrote:
 On Friday, 20 September 2013 at 16:57:43 UTC, Jonathan M Davis
 
 wrote:
 If an object is const, then all of its members are const, which
 means that any
 ranges you get from its members will be const, making such
 ranges useless.

That is so weird to hear considering I added ranges to my C++ code and my Vector<T>::all() const can easily return non-const range even tho container is itself const. This kinda looks like D is more limited in that area than C++... Or I really am not getting something.

The container isn't the problem. It's converting a const range to a tail-const one (e.g. const(Range!int) to Range!(const int)). In D, const is transitive, whereas in C++, const only affects the top level. So, if you have

    class C {
        auto foo() const { return i; }
        private int* i;
    }

then foo is going to return a const(int*). Thanks to the fact that const int* is implicitly convertible to const(int)*, you could also return const(int)*, if you chose, but you could never return int*, whereas C++ would let you do that just fine, because in C++ it's only the member variable itself which is considered const, not the whole thing, whereas in D, once an object is const, it and everything it refers to is const when accessed through it.

So, in D, if you have a member function that returns a range, the only way that it can return a non-const range is if it's able to convert the fully const range to a tail-const one (i.e. the range itself is non-const but what it refers to is still const). That can easily be done by creating an entirely new range from the original (e.g. convert it to an array and return that), but converting a const range to a tail-const one is very difficult to do with user-defined types.

To do that, we need to be able to convert const(Range!Foo) to Range!(const Foo), and thanks to how templates work, those are completely different template instantiations (meaning that technically, they don't necessarily have anything to do with each other - e.g. their internal layout could be completely different), so the compiler can't assume that that conversion will work. You have to write it yourself, which quickly runs into problems with recursive template instantiations and the like. I believe that it can be done with the proper use of static if, but it does mean that you have to know what you're doing. It's not straightforward, and the last time that I attempted it, I failed.
This is in stark contrast to arrays, where the compiler knows full well that it can convert const(int[]) to const(int)[] without causing any problems. - Jonathan M Davis
Sep 20 2013
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Sep 20, 2013 at 11:04:10AM -0700, Jonathan M Davis wrote:
[...]
 So, in D, if you have a member function that returns a range, the only
 way that it can return a non-const range is if it's able to convert
 the fully const range to a tail-const one (i.e. the range itself is
 non-const but what it refers to is still const). That can easily be
 done by creating an entirely new range from the original (e.g. convert
 it to an array and return that), but converting a const range to a
 tail-const one is very difficult to do with user- defined types.
 
 To do that, we need to be able to convert const(Range!Foo) to
 Range!(const Foo), and thanks to how templates work those are
 completely different template instantations (meaning that technically,
 they don't necessarily have anything do with each other - e.g. their
 internal layout could be completely different), so the compiler can't
 assume that that conversion will work. You have to write it yourself,
 which quickly runs into problems with recursive template
 instantiations and the like. I believe that it can be done with the
 proper use of static if, but it does mean that you have to know what
 you're doing. It's not straightforward, and the last time that I
 attempted it, I failed.
 
 This is in stark contrast to arrays, where the compiler knows full well
 that it can convert const(int[]) to const(int)[] without causing any
 problems.

Yeah, this is one of the areas of D's const system that irks me. It's annoying to deal with even outside of ranges. For example, if you want to traverse the container (from inside a member function), and the member function is const, then all references to the container's internal objects are const:

    class Tree {
        Node* root;
        const(Node)* find() const {
            // curNode is inferred as const(Node*), i.e., the
            // pointer itself is const, since root is a member
            // of const(Tree).
            auto curNode = root;
            while (curNode) {
                if (someCond) {
                    // NG: can't modify a const ptr
                    curNode = curNode.left;
                } else {
                    // NG: can't modify a const ptr
                    curNode = curNode.right;
                }
            }
            ...
            // NG: can't convert const(Node*) to
            // const(Node)*.
            return curNode;
        }
    }

The workaround is to explicitly state the type of curNode as tail-const:

    const(Node)* curNode = root;
    ...

This is a very simple case, and it's already ugly. Once the code gets more complex, the problem becomes uglier. For example, because inout is treated just like const, inout member functions suffer from the same problem. When you then throw in user-defined types that have const/immutable members, the waters get even murkier.

T

-- 
Bare foot: (n.) A device for locating thumb tacks on the floor.
Sep 20 2013
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Friday, September 20, 2013 12:29:33 Andrei Alexandrescu wrote:
 On 9/20/13 10:52 AM, Jonathan M Davis wrote:
 On Friday, September 20, 2013 10:47:15 Andrei Alexandrescu wrote:
 On 9/20/13 10:02 AM, Szymon Gatner wrote:
 On Friday, 20 September 2013 at 16:57:43 UTC, Jonathan M Davis wrote:
 If an object is const, then all of its members are const, which means
 that any
 ranges you get from its members will be const, making such ranges
 useless.

That is so weird to hear considering I added ranges to my C++ code and my Vector<T>::all() const can easily return non-const range even tho container is itself const. This kinda looks like D is more limited in that area than C++... Or I really am not getting something.

Yah, it should be possible for a const container to offer ranges over its (non-modifiable) elements.

That's probably easy to do if the container is well-written, because the container creates the range. The problem is converting a const range to a tail-const one, which is what you have to do if you ever end up with a const range for any reason - and if you're using const much, that's trivial to do (especially if you ever have a range as a member variable). Unfortunately, in practice, that conversion is quite difficult to do with user-defined types, even though it works just fine with arrays.

I'm not sure such a conversion is needed all that often.

Every time that there's a member variable which is a range, you end up needing it if you ever need to iterate over the members of that range when the object itself is const - the prime example being when the range is returned from a getter or property function. I think that you tend to be able to avoid it if you're not doing much OO and your code is very functional in nature, but if you've got many objects with data, then you start running into it (probably forcing you to move the contents of the range into an array in many cases).

Also, there are plenty of folks who try to use const and/or immutable as much as possible in their code, and when you do that, ranges become useless pretty quickly, whereas if you could easily get a tail-const version of the range when slicing it, ranges would still be useful in code that uses const and immutable heavily.

As it stands, you pretty much have to avoid const or immutable when doing much with ranges, thanks to the lack of tail-const conversion for user-defined ranges, and while that certainly isn't a fatal restriction, it's certainly frustrating when const and immutable are supposed to be major advantages of D. I suspect that the main reason that we don't get more complaints about it is because arrays don't have the problem, but the problem does still come up from time to time.

- Jonathan M Davis
Sep 20 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Friday, 20 September 2013 at 16:28:49 UTC, Joseph Rushton 
Wakeling wrote:
 I wouldn't single out DMD for criticism -- I don't know to what 
 extent the underlying reasons overlap, but all the compilers 
 cope less well with range-based material than they might in 
 theory.

 The canonical example would be something like,

     foreach (i; iota(10)) { ... }

 which in theory shouldn't be any slower than,

     foreach (i; 0 .. 10) { ... }

 but in practice is, no matter what the compiler.

Chandler gave a talk about missed optimization opportunities in LLVM some time ago, visible here: http://llvm.org/devmtg/2013-04/videos/carruth-hires.mov You may also want to look at this one for some basics first: http://www.youtube.com/watch?v=eR34r7HOU14 What happens with ranges is that they do not fit that model properly. Range code often looks like a tree (front calling subrange.front, popFront calling subrange.popFront, etc.). At each inlining step, front/popFront and friends become bigger and bigger, to the point where they aren't good inlining candidates. The compiler does not see that everything cancels out at the end. The other big performance pitfall in LLVM with ranges is bugs in the optimizer when it comes to aggregate stores/loads. I reported some of these, and the LLVM team seems willing to get rid of them because a lot of frontends generate them (notably the gcc frontend). clang does not use them, so they got less attention in the first place.
Sep 20 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Friday, 20 September 2013 at 19:29:33 UTC, Andrei Alexandrescu 
wrote:
 I'm not sure such a conversion is needed all that often.

That has been a show stopper for several things I wanted to implement in SDC. We need to be able to do that.
Sep 20 2013
prev sibling next sibling parent "Paul Jurczak" <pauljurczak yahoo.com> writes:
On Friday, 20 September 2013 at 16:28:49 UTC, Joseph Rushton 
Wakeling wrote:
[...]
 The canonical example would be something like,

     foreach (i; iota(10)) { ... }

 which in theory shouldn't be any slower than,

     foreach (i; 0 .. 10) { ... }

 but in practice is, no matter what the compiler.

Some compilers are unexpectedly smart in this case. I just benchmarked somewhat similar code with two versions of the same function: one using a straight for loop, the other a mixture of iota and "..". A plain for loop should be the fastest, right? Almost right. Here are the functions:

    int e28_0(int N = 1002) {
        int diagNumber = 1;
        int sum = diagNumber;
        for (int width = 2; width < N; width += 2)
            for (int j = 0; j < 4; ++j) {
                diagNumber += width;
                sum += diagNumber;
            }
        return sum;
    }

    int e28_1(int N = 1002) {
        int diagNumber = 1;
        int sum = diagNumber;
        foreach (width; iota(2, N, 2))
            foreach (_; 0 .. 4) {
                diagNumber += width;
                sum += diagNumber;
            }
        return sum;
    }

Here are the results:

GDC 4.8.1: gdc-4.8 -m64 -march=native -fno-bounds-check -frename-registers -frelease -O3
    669171001   830ns  e28_0
    669171001   830ns  e28_1

DMD64 2.063.2: dmd -O -noboundscheck -inline -release
    669171001  1115ns  e28_0
    669171001  1958ns  e28_1

LDC 0.11.0: ldmd2 -m64 -O -noboundscheck -inline -release
    669171001   454ns  e28_0
    669171001   395ns  e28_1
Sep 21 2013
prev sibling next sibling parent "renoX" <renozyx gmail.com> writes:
On Friday, 20 September 2013 at 15:23:20 UTC, Paulo Pinto wrote:
 Am 20.09.2013 16:24, schrieb renoX:
 That said, he made the same mistake as Haskell's authors: 
 currying is a
 *mathematical detail* which shouldn't obscure function type:
 'f: a->b->c' is less readable than 'f: a,b->c'.

 renoX

That is standard in all languages of the ML family, not only Haskell.

I know, but 'standard' doesn't mean that it is a good idea.. renoX
Sep 23 2013
prev sibling next sibling parent "PauloPinto" <pjmlp progtools.org> writes:
On Monday, 23 September 2013 at 12:08:48 UTC, renoX wrote:
 On Friday, 20 September 2013 at 15:23:20 UTC, Paulo Pinto wrote:
 Am 20.09.2013 16:24, schrieb renoX:
 That said, he made the same mistake as Haskell's authors: 
 currying is a
 *mathematical detail* which shouldn't obscure function type:
 'f: a->b->c' is less readable than 'f: a,b->c'.

 renoX

That is standard in all languages of the ML family, not only Haskell.

I know, but 'standard' doesn't mean that it is a good idea.. renoX

I guess it depends on the reader; I always found it quite natural. But then again, I have enjoyed ML languages since I learned Caml Light. -- Paulo
Sep 23 2013
prev sibling next sibling parent "renoX" <renozyx gmail.com> writes:
On Tuesday, 24 September 2013 at 09:15:37 UTC, Timon Gehr wrote:
 On 09/23/2013 02:08 PM, renoX wrote:
 On Friday, 20 September 2013 at 15:23:20 UTC, Paulo Pinto 
 wrote:
 Am 20.09.2013 16:24, schrieb renoX:
 That said, he made the same mistake as Haskell's authors: 
 currying is a
 *mathematical detail* which shouldn't obscure function type:
 'f: a->b->c' is less readable than 'f: a,b->c'.

 renoX

That is standard in all languages of the ML family, not only Haskell.

I know, but 'standard' doesn't mean that it is a good idea.. renoX

Neither does 'mathematical detail' mean it is obscure, unreadable or a mistake as you seem to imply.

I'm not sure you understood my point: a 'normal' function takes inputs and produces an output. In the notation a,b->c you can clearly see the inputs and the output with a minimum of 'syntax noise' around them. In the notation a -> b -> c, the 'syntax noise' is bigger (those arrows between the input parameters are much heavier on the eye than a comma), and what does it bring? Nothing. It's the notation which makes the function type less readable that I consider a mistake. renoX
Sep 24 2013
prev sibling next sibling parent "Max Samukha" <maxsamukha gmail.com> writes:
On Tuesday, 24 September 2013 at 11:32:18 UTC, renoX wrote:
 On Tuesday, 24 September 2013 at 09:15:37 UTC, Timon Gehr wrote:
 On 09/23/2013 02:08 PM, renoX wrote:
 On Friday, 20 September 2013 at 15:23:20 UTC, Paulo Pinto 
 wrote:
 Am 20.09.2013 16:24, schrieb renoX:
 That said, he made the same mistake as Haskell's authors: 
 currying is a
 *mathematical detail* which shouldn't obscure function type:
 'f: a->b->c' is less readable than 'f: a,b->c'.

 renoX

That is standard in all languages of the ML family, not only Haskell.

I know, but 'standard' doesn't mean that it is a good idea.. renoX

Neither does 'mathematical detail' mean it is obscure, unreadable or a mistake as you seem to imply.

I'm not sure you understood my point: a 'normal' function takes inputS and produce an output, in the notation: a,b->c you can clearly see the inputS and the output with a minimum of 'syntax noise' around them. In the notation a -> b -> c, the 'syntax noise' is bigger (those arrows between the input parameters are much more 'heavy on the eye' than a quote), and what does it bring? Nothing.. It's the notation which makes the function type less readable which I consider a mistake. renoX

A 'normal' function in Haskell takes exactly one object and returns exactly one object. a -> b -> c is actually a -> (b -> c) because -> is right-associative. It's perfectly readable for people in the Haskell subculture. You'll have a hard time convincing them otherwise :)
Sep 24 2013
prev sibling next sibling parent "Szymon Gatner" <noemail gmail.com> writes:
On Tuesday, 24 September 2013 at 12:06:22 UTC, Max Samukha wrote:
 A 'normal' function in Haskell takes exactly one object and 
 returns exactly one object. a -> b -> c is actually a -> (b -> 
 c) because -> is right-associative. It's perfectly readable for 
 people in the Haskell subculture. You'll have hard time 
 convincing them otherwise :)

Isn't function application in Haskell left-associative? (I might be confusing terms as I am just learning it)
Sep 24 2013
prev sibling next sibling parent "Max Samukha" <maxsamukha gmail.com> writes:
On Tuesday, 24 September 2013 at 12:09:28 UTC, Szymon Gatner 
wrote:
 On Tuesday, 24 September 2013 at 12:06:22 UTC, Max Samukha 
 wrote:
 A 'normal' function in Haskell takes exactly one object and 
 returns exactly one object. a -> b -> c is actually a -> (b -> 
 c) because -> is right-associative. It's perfectly readable 
 for people in the Haskell subculture. You'll have hard time 
 convincing them otherwise :)

Isn't function application in Haskell left-associative? (I might be confusing terms as I am just learning it)

So am I. Function application is left-associative. (f a b) is the equivalent of ((f a) b). Arrow in the lambda types is right-associative. If you mentally add parens, then things like (a -> b -> c) -> [a] -> [b] -> [c] look less intimidating - (a -> (b -> c)) -> ([a] -> ([b] -> [c])). Well, they don't. But at least you can make some sense of them, sometimes. They definitely don't look more intimidating than D's template instantiations.
Sep 24 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 24 September 2013 at 11:32:18 UTC, renoX wrote:
 I'm not sure you understood my point: a 'normal' function takes 
 inputS and produce an output, in the notation: a,b->c you can 
 clearly see the inputS and the output with a minimum of 'syntax 
 noise' around them.
 In the notation a -> b -> c, the 'syntax noise' is bigger 
 (those arrows between the input parameters are much more 'heavy 
 on the eye' than a quote), and what does it bring?
 Nothing..

 It's the notation which makes the function type less readable 
 which I consider a mistake.

You are putting up an artificial barrier here. a -> b -> c is a function that takes a as a parameter and returns a function that takes b as a parameter and returns c. The concept of multiple parameters and the like exists only in your mind. You are trying to map a concept you have in your mind onto the language when it does NOT exist in the language.
Sep 24 2013
prev sibling next sibling parent "renoX" <renozyx gmail.com> writes:
On Tuesday, 24 September 2013 at 13:04:10 UTC, deadalnix wrote:
 On Tuesday, 24 September 2013 at 11:32:18 UTC, renoX wrote:
 I'm not sure you understood my point: a 'normal' function 
 takes inputS and produce an output, in the notation: a,b->c 
 you can clearly see the inputS and the output with a minimum 
 of 'syntax noise' around them.
 In the notation a -> b -> c, the 'syntax noise' is bigger 
 (those arrows between the input parameters are much more 
 'heavy on the eye' than a quote), and what does it bring?
 Nothing..

 It's the notation which makes the function type less readable 
 which I consider a mistake.

You are putting artificial barrier here. a -> b -> c is a function that take a as parameter and return a function that take b as parameter and return c. The concept of multiple parameters and stuff like that exists only in your mind. You try to map a concept you have in your mind in the language when it DO NOT exist in the language.

A language is not something set in stone! If the design of a language requires unnecessary visual noise, I dislike it; this is not as bad as Lisp, but it is still suboptimal. This is not the only 'visual noise' in Haskell: for example, Idris replaced '::' by ':', a good change IMHO. renoX
Sep 24 2013
prev sibling next sibling parent "Max Samukha" <maxsamukha gmail.com> writes:
On Tuesday, 24 September 2013 at 13:46:16 UTC, renoX wrote:
 On Tuesday, 24 September 2013 at 13:04:10 UTC, deadalnix wrote:
 On Tuesday, 24 September 2013 at 11:32:18 UTC, renoX wrote:
 I'm not sure you understood my point: a 'normal' function 
 takes inputS and produce an output, in the notation: a,b->c 
 you can clearly see the inputS and the output with a minimum 
 of 'syntax noise' around them.
 In the notation a -> b -> c, the 'syntax noise' is bigger 
 (those arrows between the input parameters are much more 
 'heavy on the eye' than a quote), and what does it bring?
 Nothing..

 It's the notation which makes the function type less readable 
 which I consider a mistake.

You are putting artificial barrier here. a -> b -> c is a function that take a as parameter and return a function that take b as parameter and return c. The concept of multiple parameters and stuff like that exists only in your mind. You try to map a concept you have in your mind in the language when it DO NOT exist in the language.

A language is not something set in stone! If the design of a language requires unecessary visual noise, I dislike it, this is not as bad as Lisp, but it is still suboptimal.

I think that -> is neither unnecessary nor noise. After having played with Haskell for a while, I actually find the syntax of D unnecessarily redundant.
 This is not the only 'visual noise' in Haskell: for example 
 Idris replaced '::' by ':', a good change IMHO.

That is probably because ':' is already the list cons operator.
 renoX

Sep 24 2013
prev sibling next sibling parent "renoX" <renozyx gmail.com> writes:
On Tuesday, 24 September 2013 at 14:15:12 UTC, Max Samukha wrote:
[cut]
 I think that -> is neither unnecessary nor noise. After having 
 played with Haskell for a while, I actually find the syntax of 
 D unnecessarily redundant.

Oh, D is hardly a good example for syntax! Better than C++ doesn't say much.. That said, I don't see how one could prefer 'a -> b -> c' over 'a,b -> c' in this case..
 This is not the only 'visual noise' in Haskell: for example 
 Idris replaced '::' by ':', a good change IMHO.

That is probably because ':' is the list append operator already.

Yes, it's the other way round in Idris, but for once I prefer D's operator '~': list appending is common enough that it deserves a proper operator, not a doubled one like '::' in Idris or others (Scala?). renoX
Sep 24 2013
prev sibling parent "Max Samukha" <maxsamukha gmail.com> writes:
On Tuesday, 24 September 2013 at 14:24:48 UTC, renoX wrote:
 On Tuesday, 24 September 2013 at 14:15:12 UTC, Max Samukha 
 wrote:
 [cut]
 I think that -> is neither unnecessary nor noise. After having 
 played with Haskell for a while, I actually find the syntax of 
 D unnecessarily redundant.

Oh, D is hardly a good example for syntax! Better than C++ doesn't say much..

Ok.
 That said, I don't see how one could prefer 'a -> b -> c' over 
 'a,b -> c' in this case..

In the case of Haskell, it is not a matter of preference. The syntax follows from the language semantics, and I don't see how it could be different and better at the same time. (a, b) -> c means a function that maps a pair to an object (the same as ((,) a b) -> c). a -> b -> c, with the parens around b -> c omitted thanks to right associativity, means a function that maps an object to a function. How does a, b -> c fit in this picture?
 This is not the only 'visual noise' in Haskell: for example 
 Idris replaced '::' by ':', a good change IMHO.

That is probably because ':' is the list append operator already.

Yes, it's the other way round in Idris, but for once I prefer D's operator '~': list appending is common enough that it deserves a proper operator not a doubled one :: like in Idris or other (Scala?).

Sadly, I don't know Idris or Scala. Now we are talking about subjective preferences, which is an exercise in futility. :)
 renoX

Sep 24 2013