digitalmars.D - Bartosz Milewski seems to like D more than C++ now :)
- Szymon Gatner (4/4) Sep 19 2013 I had similar thoughts when watching GoingNative 2013:
- Andrei Alexandrescu (4/8) Sep 19 2013 That I think is sketchy advice.
- Szymon Gatner (9/17) Sep 19 2013 Tbh I am still testing the effects of this change and so far I do
- QAston (17/26) Sep 20 2013 What Sean Parent does in his value semantics talk is basically an
- QAston (2/33) Sep 20 2013 http://dpaste.dzfl.pl/811800a5
- Szymon Gatner (27/58) Sep 20 2013 The struct was only an example, obviously you can use proper
- QAston (5/31) Sep 20 2013 Well, I won't argue about naming, for me when a type is a wrapper
- Szymon Gatner (2/5) Sep 20 2013 I can live with that ;)
- Walter Bright (3/5) Sep 19 2013 http://www.reddit.com/r/programming/comments/1mqpcq/edward_chands_by_bar...
- Walter Bright (3/8) Sep 19 2013 And Bartosz responds in this one:
- H. S. Teoh (65/69) Sep 19 2013 Quote:
- Nick Sabalausky (11/21) Sep 19 2013 Heh, I love the article's title. Although:
- PauloPinto (20/53) Sep 20 2013 I dislike C, and will take C++ safety and abstraction
- H. S. Teoh (35/38) Sep 20 2013 [...]
- Wyatt (6/14) Sep 20 2013 This seems apt:
- PauloPinto (14/75) Sep 20 2013 I understand you situation.
- eles (5/16) Sep 20 2013 I think that, even more than the hidden code execution
- bearophile (19/25) Sep 19 2013 What about Ada and Rust?
- PauloPinto (7/13) Sep 20 2013 Many people in the C family of languages tend to disregard the
- bearophile (7/14) Sep 20 2013 That's an interesting paper, the symmetry between the two forms
- Peter Alexander (7/14) Sep 20 2013 I don't think his advice is to use recursion instead of loops. I
- PauloPinto (11/25) Sep 20 2013 While at the university, I got to read a paper about a HPC
- bearophile (22/36) Sep 20 2013 In my comment I have packed too much in too little space. Let me
- bearophile (10/13) Sep 20 2013 Two other things to notice is that Haskell code seems to use too
- renoX (11/14) Sep 20 2013 ++
- Paulo Pinto (2/15) Sep 20 2013 That is standard in all languages of the ML family, not only Haskell.
- renoX (3/12) Sep 23 2013 I know, but 'standard' doesn't mean that it is a good idea..
- PauloPinto (5/18) Sep 23 2013 I guess it depends on the reader, I always found it quite natural.
- Timon Gehr (3/14) Sep 24 2013 Neither does 'mathematical detail' mean it is obscure, unreadable or a
- renoX (12/31) Sep 24 2013 I'm not sure you understood my point: a 'normal' function takes
- Max Samukha (6/38) Sep 24 2013 A 'normal' function in Haskell takes exactly one object and
- Szymon Gatner (3/8) Sep 24 2013 Isn't function application in Haskell left-associative? (I might
- Max Samukha (11/21) Sep 24 2013 So am I.
- deadalnix (7/17) Sep 24 2013 You are putting artificial barrier here.
- renoX (7/26) Sep 24 2013 A language is not something set in stone! If the design of a
- Max Samukha (5/33) Sep 24 2013 I think that -> is neither unnecessary nor noise. After having
- renoX (11/18) Sep 24 2013 Oh, D is hardly a good example for syntax! Better than C++
- Max Samukha (12/32) Sep 24 2013 In case of Haskell, it is not a matter of preference. The syntax
- Andrei Alexandrescu (3/6) Sep 20 2013 Why would there be a performance loss?
- Joseph Rushton Wakeling (5/6) Sep 20 2013 Depends on the particular case, but my experience is that _in practice_ ...
- Walter Bright (12/18) Sep 20 2013 I know that, at least with dmd's back end, it's because the optimizer wa...
- Joseph Rushton Wakeling (9/20) Sep 20 2013 I wouldn't single out DMD for criticism -- I don't know to what extent t...
- Andrei Alexandrescu (5/33) Sep 20 2013 I think I know how to fix that. I hypothesize it's about using actual
- Joseph Rushton Wakeling (2/14) Sep 20 2013 Excellent, that'll be great to see :-)
- Andrei Alexandrescu (3/19) Sep 20 2013 http://d.puremagic.com/issues/show_bug.cgi?id=11077
- deadalnix (19/28) Sep 20 2013 Chandler made a talk about missed optimization opportunities in
- Paul Jurczak (39/44) Sep 21 2013 On Friday, 20 September 2013 at 16:28:49 UTC, Joseph Rushton
- Andrei Alexandrescu (4/10) Sep 20 2013 With dmd that's indeed the case. But LLVM is good at enregistering small...
- Piotr Szturmaj (12/17) Sep 20 2013 Because range concept underutilize current CPUs. I don't know if all
- bearophile (5/10) Sep 20 2013 There are papers on this topic, like:
- Andrei Alexandrescu (7/25) Sep 20 2013 This is just a supposition. Inlining is automatic and just works subject...
- Piotr Szturmaj (5/32) Sep 20 2013 This was not my point. I know that inlining is automatic, but certainly
- Walter Bright (3/4) Sep 20 2013 The nice thing about modern optimizers is they are indefatigable. They d...
- Szymon Gatner (6/20) Sep 20 2013 If only std algorithms took containers (by that I mean things
- Jacob Carlborg (4/7) Sep 20 2013 Can't a container be a range as well?
- Szymon Gatner (5/12) Sep 20 2013 Not sure what you mean but C++ has no concept of a range.
- Szymon Gatner (17/31) Sep 20 2013 To clarify what I mean by crippled: standard algorithms were
- Jacob Carlborg (4/7) Sep 20 2013 Oh, I was thinking about D. Sorry for the confusion.
- Dmitry Olshansky (13/18) Sep 20 2013 For Christ sake no, no and no. For starters range skips/drops elements
- Szymon Gatner (6/21) Sep 20 2013 That would be a filtering range which adds additional logic that
- Dmitry Olshansky (11/31) Sep 20 2013 Filtering is just an adapter.
- H. S. Teoh (25/57) Sep 20 2013 A container should not be confused with a range. That way leads to
- Craig Dillabaugh (8/31) Sep 20 2013 Excuse my lack of knowledge on Ranges, so maybe what I am
- Craig Dillabaugh (4/10) Sep 20 2013 On Friday, 20 September 2013 at 14:55:41 UTC, Craig Dillabaugh
- H. S. Teoh (18/48) Sep 20 2013 [...]
- Andrei Alexandrescu (4/9) Sep 20 2013 The tree object would have functions returning by value each of these
- Dmitry Olshansky (14/70) Sep 20 2013 Then traversing all files in directory in specific order is not
- Joseph Rushton Wakeling (3/7) Sep 20 2013 Built-in arrays are not _always_ ranges. Consider const(int[]) ... as I...
- Andrei Alexandrescu (8/16) Sep 20 2013 Well you can't call popFront on any const range.
- Szymon Gatner (7/24) Sep 20 2013 What is even the purpose of const ranges? It makes little sense
- Jonathan M Davis (12/38) Sep 20 2013 If an object is const, then all of its members are const, which means th...
- Szymon Gatner (7/11) Sep 20 2013 That is so weird to hear considering I added ranges to my C++
- Andrei Alexandrescu (4/15) Sep 20 2013 Yah, it should be possible for a const container to offer ranges over
- Jonathan M Davis (9/23) Sep 20 2013 That's probably easy to do if the container is well-written, because the...
- Andrei Alexandrescu (3/25) Sep 20 2013 I'm not sure such a conversion is needed all that often.
- Jonathan M Davis (22/48) Sep 20 2013 Every time that there's a member variable which is a range, you end up n...
- deadalnix (5/6) Sep 20 2013 That has been a show stopper for several things I wanted to
- Jonathan M Davis (35/48) Sep 20 2013 The container isn't the problem. It's converting a const range to a tail...
- H. S. Teoh (39/62) Sep 20 2013 [...]
- Andrei Alexandrescu (3/21) Sep 20 2013 I'd agree. As an aside, they don't have to make sense.
- H. S. Teoh (15/24) Sep 20 2013 Which makes it even more confusing, since newbies would probably equate
- Joseph Rushton Wakeling (7/15) Sep 20 2013 Yes, it's completely non-obvious. I think the first time I realized the...
- H. S. Teoh (15/32) Sep 20 2013 Sad to say, I encountered a good number of Phobos bugs caused by the
- Joseph Rushton Wakeling (8/19) Sep 24 2013 In my case, I discovered a number of bugs that stemmed from unittests wh...
- Szymon Gatner (6/20) Sep 20 2013 Iteration is a stateful process, ranges are related to the
- Dmitry Olshansky (14/32) Sep 20 2013 A text-book example of self-destruction(?).
- Szymon Gatner (9/50) Sep 20 2013 OK so it seems we agree. I never said that containers should be
- Dmitry Olshansky (12/39) Sep 20 2013 Then it may be a misunderstanding on my part. I was referring to your
- Szymon Gatner (7/57) Sep 20 2013 Mind that "Can't a container be a range as well?" was not from
- Dmitry Olshansky (12/33) Sep 20 2013 Yup, but it was what I was answering ... when you asked about why would
- Szymon Gatner (5/45) Sep 20 2013 Yup, we agree then :) I think I add to confusion also because I
- Nick Sabalausky (15/29) Sep 20 2013 I can vouch that ease of conflating array/slice/range, while normally
- Dmitry Olshansky (22/51) Sep 20 2013 Aye.
- Andrei Alexandrescu (3/22) Sep 20 2013 Agreed. A container is a range factory.
- Andrej Mitrovic (5/6) Sep 20 2013 Yeah. For example, Andrei's talk was very entertaining, but it was
I had similar thoughts when watching GoingNative 2013: http://bartoszmilewski.com/2013/09/19/edward-chands/ I was more and more scared with every talk and now I am valualizing my polymorphic types à la Sean Parent
Sep 19 2013
On 9/19/13 3:18 PM, Szymon Gatner wrote:I had similar thoughts when watching GoingNaive 2013: http://bartoszmilewski.com/2013/09/19/edward-chands/Nice piece.I was more and more scared with every talk and now I am valualizing my polymorphic types a'la Sean ParentThat I think is sketchy advice. Andrei
Sep 19 2013
On Thursday, 19 September 2013 at 22:46:09 UTC, Andrei Alexandrescu wrote:On 9/19/13 3:18 PM, Szymon Gatner wrote:Tbh I am still testing the effects of this change and so far I do see positive effects on the client code but I also see that what was obvious to users before (everyone understands that they should derive from a polymorphic base to get polymorphic effect) is now confusing (how can I customize if I can't derive). Please do elaborate on your view. Always interesting to read on design from smarter people :)I had similar thoughts when watching GoingNaive 2013: http://bartoszmilewski.com/2013/09/19/edward-chands/Nice piece.I was more and more scared with every talk and now I am valualizing my polymorphic types a'la Sean ParentThat I think is sketchy advice.
Sep 19 2013
On Thursday, 19 September 2013 at 22:46:09 UTC, Andrei Alexandrescu wrote:On 9/19/13 3:18 PM, Szymon Gatner wrote:What Sean Parent does in his value semantics talk is basically an interface adapter for a struct, wrapped in a struct providing implicit conversions. By using structs by default and adapting them to interfaces as needed you get to have the smallest possible overhead on a single function call - either no virtual dispatch or 1 virtual dispatch. When writing adapters to interfaces you get 2 or more. The other benefit is that this approach reduces the dependencies you need to know about when you use the wrapper type - the approach would not work in D due to lack of argument-dependent lookup, so you need to write the adapter type (for a particular type you want to convert) yourself anyway. And I think it's better that way. In the end the whole thing is just the adapter design pattern applied to C++.I had similar thoughts when watching GoingNaive 2013: http://bartoszmilewski.com/2013/09/19/edward-chands/Nice piece.I was more and more scared with every talk and now I am valualizing my polymorphic types a'la Sean ParentThat I think is sketchy advice. Andrei
Sep 20 2013
On Friday, 20 September 2013 at 07:39:47 UTC, QAston wrote:On Thursday, 19 September 2013 at 22:46:09 UTC, Andrei Alexandrescu wrote:http://dpaste.dzfl.pl/811800a5On 9/19/13 3:18 PM, Szymon Gatner wrote:What Sean Parent does in his value semantics talk is basically an interface adapter for a struct, wrapped in a struct providing implicit conversions. By using structs by default and adapting them to interfaces as needed you get to have the smallest possible overhead on a single function call - either no virtual dispatch or 1 virtual dispatch. When writing adapters to interfaces you get 2 or more. The other benefit is that approach is reducing dependencies you need to know about when you use the wrapper type - the approach would not work in D due to lack of argument dependent lookup so you need to write the adapter type (for a particular type you want to convert) yourself anyways. And I think it's better that way. In the end the whole thing is just adapter design pattern applied to C++.I had similar thoughts when watching GoingNaive 2013: http://bartoszmilewski.com/2013/09/19/edward-chands/Nice piece.I was more and more scared with every talk and now I am valualizing my polymorphic types a'la Sean ParentThat I think is sketchy advice. Andrei
Sep 20 2013
On Friday, 20 September 2013 at 07:39:47 UTC, QAston wrote:On Thursday, 19 September 2013 at 22:46:09 UTC, Andrei Alexandrescu wrote:The struct was only an example, obviously you can use proper class implementations with data hiding etc. You will also get at least 1 virtual call because a "concept" is always an abstract class. That being said, I've been using this technique in other places in my code thus far too, for example: my C++ ranges (yeah idea stolen from D) are templates so can be composed for free but there are abstractions like InputRange<> which "erase" (this word is stupid btw, no type is erased really) underlying type and provide convenient abstraction in places. I also use this technique for my std containers. Vector is a std::vector with polymorphic allocator, again it holds Allocator<> by value but underlying implementation is (often composition) of allocator templates. In allocator's case I do explicitly want container alloc to be polymorphic type so that my interfaces never depend on allocator type. Difference now for me is that I did sometimes turn value types into polymorphic types using this technique (like with allocators) but it never occurred to me that the other way can be beneficial too.On 9/19/13 3:18 PM, Szymon Gatner wrote:What Sean Parent does in his value semantics talk is basically an interface adapter for a struct, wrapped in a struct providing implicit conversions. By using structs by default and adapting them to interfaces as needed you get to have the smallest possible overhead on a single function call - either no virtual dispatch or 1 virtual dispatch. When writing adapters to interfaces you get 2 or more.I had similar thoughts when watching GoingNaive 2013: http://bartoszmilewski.com/2013/09/19/edward-chands/Nice piece.I was more and more scared with every talk and now I am valualizing my polymorphic types a'la Sean ParentThat I think is sketchy advice. 
AndreiThe other benefit is that approach is reducing dependencies you need to know about when you use the wrapper type - the approach would not work in D due to lack of argument dependent lookup so you need to write the adapter type (for a particular type you want to convert) yourself anyways. And I think it's better that way. In the end the whole thing is just adapter design pattern applied to C++.Well, an Adapter is supposed to *change* the interface to adapt to client needs, so this is a bit of a stretch to call it that here, but yeah, this technique (the "external polymorphism" idiom) can be used to adapt. This really is just a (smart) pointer with the full interface of the underlying object (which also adds the benefit of using "." instead of "->" on the target).
Sep 20 2013
On Friday, 20 September 2013 at 08:20:48 UTC, Szymon Gatner wrote:The struct was only an example, obviously you can use proper class implementations with data hiding etc. You will also get at least 1 virtual call because a "concept" is always an abstract class.Direct use of the value type has no indirections.That being said, I've been using this technique in other places in my code this far too, for example: my C++ ranges (yeah idea stolen from D) are templates so can be composed for free but there are abstractions like InputRange<> which "erase" (this word is stupid btw, no type is erased really) underlying type and provide convenient abstraction in places. I also use this technique for my std containers. Vector is a std::vector with polymorphic allocator, again it holds Allocator<> by value but underlying implementation is (often composition) of allocator templates. In allocator's case I do explicitly want container alloc to be polymorphic type so that my interfaces never depend on allocator type. Difference now for me is that I did sometimes turn value types into polymorphic types using this technique (like with allocators) but it never occurred to me that other wan can be beneficial too. Well Adapter is suppose to *change* interface to adapt to client needs so this is a bit of a strech to call it that here, but yeah, this technique called ("external polymorphism" idiom) can be used to adapt . This really just is a (smart)pointer with a full interface of underlying object (which also adds benefit or using "." instead of "->" on target).Well, I won't argue about naming, for me when a type is a wrapper which provides desired interface to that type is an adapter :P. I agree that this idiom is useful and I use it as needed too.
Sep 20 2013
On Friday, 20 September 2013 at 08:41:34 UTC, QAston wrote:Well, I won't argue about naming, for me when a type is a wrapper which provides desired interface to that type is an adapter :P.I can live with that ;)
Sep 20 2013
On 9/19/2013 3:18 PM, Szymon Gatner wrote:I had similar thoughts when watching GoingNaive 2013: http://bartoszmilewski.com/2013/09/19/edward-chands/http://www.reddit.com/r/programming/comments/1mqpcq/edward_chands_by_bartosz_milewski/ https://news.ycombinator.com/item?id=6414162
Sep 19 2013
On 9/19/2013 4:32 PM, Walter Bright wrote:On 9/19/2013 3:18 PM, Szymon Gatner wrote:And Bartosz responds in this one: http://www.reddit.com/r/cpp/comments/1mqhxl/edward_chands_by_bartosz_milewski/I had similar thoughts when watching GoingNaive 2013: http://bartoszmilewski.com/2013/09/19/edward-chands/http://www.reddit.com/r/programming/comments/1mqpcq/edward_chands_by_bartosz_milewski/ https://news.ycombinator.com/item?id=6414162
Sep 19 2013
On Fri, Sep 20, 2013 at 12:18:22AM +0200, Szymon Gatner wrote:I had similar thoughts when watching GoingNaive 2013: http://bartoszmilewski.com/2013/09/19/edward-chands/ I was more and more scared with every talk and now I am valualizing my polymorphic types a'la Sean ParentQuote: There was so much talk about how not to use C++ that it occurred to me that maybe this wasn’t the problem of incompetent programmers, but that straightforward C++ is plain wrong. So if you just learn the primitives of the language and try to use them, you’re doomed. ... [big snippage] ... I can go on and on like this (and I often do!). Do you see the pattern? Every remedy breeds another remedy. It’s no longer just the C subset that should be avoided. Every new language feature or library addition comes with a new series of gotchas. And you know a new feature is badly designed if Scott Meyers has a talk about it. (His latest was about the pitfalls of, you guessed it, move semantics.) This is sooo true. It reflects my experience with C++. Honestly, it got to a point where I gave up trying to follow the remedy upon the patch to another remedy to a third remedy that patches yet another remedy on top of a fundamentally broken core. I just adopted my own C++ coding style and stuck with it. Unfortunately, that approach is unworkable in real-life projects involving more than one programmer. At work, I dread every single time I need to look at the C++ module (which fortunately has been confined to a single module, although it's also one of the largest). For "performance reasons" they eschewed the built-in C++ try/catch constructs, and implemented their own replacements using preprocessor macros. You bet there are memory leaks, pointer bugs, and all sorts of nasty things just from this one "optimization" alone. And it just goes downhill from there. It's this endless cycle of a remedy upon a remedy upon a patch to a remedy that drove me to look for something better. I found D. 
:) One of the outstanding features of D to me is that code written with simple language constructs is, surprisingly, actually correct. As opposed to C++'s situation of code being wrong by default until you learn the 8th circle black belt advanced level C++ coding techniques. Anyway, this bit sounds interesting: It’s a common but false belief that reference counting (using shared pointers in particular) is better than garbage collection. There is actual research showing that the two approaches are just two sides of the same coin. You should realize that deleting a shared pointer may lead to an arbitrary long pause in program execution, with similar performance characteristics as a garbage sweep. It’s not only because every serious reference counting algorithm must be able to deal with cycles, but also because every time a reference count goes to zero on a piece of data a whole graph of pointers reachable from that object has to be traversed. A data structure built with shared pointers might take a long time to delete and, except for simple cases, you’ll never know which shared pointer will go out of scope last and trigger it. Sounds like D's decision to go with a GC may not be *that* bad after all... Let’s take a great leap of faith and assume that all these things will be standardized and implemented by, say, 2015. Even if that happens, I still don’t think people will be able to use C++ for mainstream parallel programming. C++ has been designed for single thread programming, and parallel programming requires a revolutionary rather than evolutionary change. Two words: data races. Imperative languages offer no protection against data races — maybe with the exception of D. Welp, time to get our act together and clean up that mess that is 'shared', so that D will actually stand a chance of lasting past the next 10 years... ;-) T -- Ruby is essentially Perl minus Wall.
Sep 19 2013
On Thu, 19 Sep 2013 16:48:43 -0700 "H. S. Teoh" <hsteoh quickfur.ath.cx> wrote:On Fri, Sep 20, 2013 at 12:18:22AM +0200, Szymon Gatner wrote:Heh, I love the article's title. Although: "[Edward Scissorhands is] a darker version of Pinocchio, shot in suburban settings." I saw Edward Scissorhands as being a re-telling of Frankenstein: A grotesque, but kind, man-made creature is ostracized by the frightened villagers to ultimately tragic results. Although it could be argued that Pinocchio is a lighter re-imagining of Frankenstein anyway.I had similar thoughts when watching GoingNaive 2013: http://bartoszmilewski.com/2013/09/19/edward-chands/ I was more and more scared with every talk and now I am valualizing my polymorphic types a'la Sean ParentIt's this endless cycle of a remedy upon a remedy upon a patch to a remedy that drove me to look for something better. I found D. :)So true.
Sep 19 2013
On Thursday, 19 September 2013 at 23:50:04 UTC, H. S. Teoh wrote:On Fri, Sep 20, 2013 at 12:18:22AM +0200, Szymon Gatner wrote:I dislike C, and will take C++ safety and abstraction capabilities over C, unless forced to do otherwise. Now, having said this. I hardly write any C++ nowadays. In the types of projects we do, it is all about JVM and .NET languages. Sometimes even replacing "legacy C++" systems by new systems done in those languages. So writing C++, or even C, tends to be restricted to a few method calls. For example, recently we had a project for real time data analysis on Windows. interfaces and SIMD optimizations for a few algorithms.I had similar thoughts when watching GoingNaive 2013: http://bartoszmilewski.com/2013/09/19/edward-chands/ I was more and more scared with every talk and now I am valualizing my polymorphic types a'la Sean ParentQuote: There was so much talk about how not to use C++ that it occurred to me that maybe this wasn’t the problem of incompetent programmers, but that straightforward C++ is plain wrong. So if you just learn the primitives of the language and try to use them, you’re doomed. ... [big snippage] ... I can go on and on like this (and I often do!). Do you see the pattern? Every remedy breeds another remedy. It’s no longer just the C subset that should be avoided. Every new language feature or library addition comes with a new series of gotchas. And you know a new feature is badly designed if Scott Meyers has a talk about it. (His latest was about the pitfalls of, you guessed it, move semantics.) This is sooo true. It reflects my experience with C++. Honestly, it got to a point where I gave up trying to following the remedy upon the patch to another remedy to a third remedy that patches yet another remedy on top of a fundamentally broken core. [... 
snipped]Sounds like D's decision to go with a GC may not be *that* bad after all...I like GC-enabled systems programming languages since I used Oberon, and had some contact with Modula-3. Like many things in programming, the only way to convince other developers is to have them use such systems. -- Paulo
Sep 20 2013
On Fri, Sep 20, 2013 at 09:14:28AM +0200, PauloPinto wrote:On Thursday, 19 September 2013 at 23:50:04 UTC, H. S. Teoh wrote:[...]I dislike C, and will take C++ safety and abstraction capabilities over C, unless forced to do otherwise.[...] C++ does have some parts that improve over C, it's true. But it's also a minefield filled with pitfalls. Due to its insistence on C compatibility, it basically can only paper over C's flaws, but push hard enough and you're floundering in C problems once more. Well, actually, don't even push -- write straightforward code, and it's almost always wrong. And of the exponential number of ways to write *non*-straightforward code, only one is correct (if that even exists in C++ -- maybe I should rather say, only one is least problematic, 'cos they all are). It's unfortunate that due to C++ basically giving you a gun that can shoot you in the foot backwards while you're aiming at something else (all conveniently abstracted away behind wrappers so you won't even notice the bleeding), it's just sooo easy to abuse. As I said many times before, at my job they migrated from C++ back to C, because, for all of its flaws, C at least has a well-understood core and well-known ways of managing its risky parts. 
D does have its own warts and problems, it's true, but it's a world of a difference from C/C++. It's like the difference between being pricked by a needle every now and then vs. drinking glass shards. T -- Be in denial for long enough, and one day you'll deny yourself of things you wish you hadn't.
Sep 20 2013
On Friday, 20 September 2013 at 13:00:49 UTC, H. S. Teoh wrote:[...] It's unfortunate that due to C++ basically giving you a gun that can shoot you in the foot backwards while you're aiming at something else (all convniently abstracted away behind wrappers so you won't even notice the bleeding)This seems apt: "C lets you shoot yourself in the foot. C++ lets you std::anatomy<limb>(std::dim::Directions.DOWN, std::anatomy::digits<5>).shoot(this)" -- eevee -Wyatt
Sep 20 2013
On Friday, 20 September 2013 at 13:00:49 UTC, H. S. Teoh wrote:On Fri, Sep 20, 2013 at 09:14:28AM +0200, PauloPinto wrote:I understand your situation. In my case, when I met C, I already knew Turbo Pascal, so C seemed a bit stone-age due to string handling, no modules, unsafe by default, no reference types, no OO. C++ gave me some of the Turbo Pascal comfort back, together with C compatibility and portability. Although writing portable C and C++ code in the 90's meant lots of #ifdefs. Anyway, as I mentioned in my previous post, both languages hardly play any role in the type of work my employer targets. I guess it is always a matter of how we got to learn our tools, and personal experience. -- PauloOn Thursday, 19 September 2013 at 23:50:04 UTC, H. S. Teoh wrote:[...]I dislike C, and will take C++ safety and abstraction capabilities over C, unless forced to do otherwise.[...] C++ does have some parts that improve over C, it's true. But it's also a minefield filled with pitfalls. Due to its insistence with C compatibility, it basically can only paper over C's flaws, but push hard enough and you're floundering in C problems once more. Well, actually, don't even push -- write straightforward code, and it's almost always wrong. And of the exponential number of ways to write *non*-straightforward code, only one is correct (if that even exists in C++ -- maybe I should rather say, only one is least problematic, 'cos they all are). It's unfortunate that due to C++ basically giving you a gun that can shoot you in the foot backwards while you're aiming at something else (all convniently abstracted away behind wrappers so you won't even notice the bleeding), it's just sooo easy to abuse. As I said many times before, at my job they migrated from C++ back to C, because, for all of its flaws, C at least has a well-understood core and well-known ways of managing its risky parts. 
The C++ codebase we used to have was completely unmaintainable because it just combines so many C++ features in the worst possible ways -- something inevitable when it has passed through so many hands. Both C and C++ require a lot of coding by convention and extreme attention to detail, but at least in C, mistakes tend to be noticed rather quickly, whereas in C++ you could be coding for months, years, before you even notice anything wrong. And by then, it's too late to fix it because half the codebase is already written in the "wrong" way. (And there are just too many wrong ways to write C++ code.) It's such a refreshing change whenever I get to work with D in my free time. D does have its own warts and problems, it's true, but it's a world of a difference from C/C++. It's like the difference between being pricked by a needle every now and then vs. drinking glass shards. T
Sep 20 2013
On Friday, 20 September 2013 at 13:00:49 UTC, H. S. Teoh wrote:On Fri, Sep 20, 2013 at 09:14:28AM +0200, PauloPinto wrote:I think that, even more than the hidden code execution (constructors, operators) and unpredictable code paths (exceptions), this is the strongest argument among those Linus made in his rant against C++.On Thursday, 19 September 2013 at 23:50:04 UTC, H. S. Teoh wrote:but at least in C, mistakes tend to be noticed rather quickly, whereas in C++ you could be coding for months, years, before you even notice anything wrong. And by then, it's too late to fix it because half the codebase is already written in the "wrong" way.
Sep 20 2013
Szymon Gatner:http://bartoszmilewski.com/2013/09/19/edward-chands/From the blog post:Imperative languages offer no protection against data races — maybe with the exception of D.<What about Ada and Rust?Ask any C++ guru and they will tell you: avoid mutation, avoid side effects, don’t use loops, avoid class hierarchies and inheritance.<At Going Native 2013 there was a very good talk that among other things suggests avoiding raw loops in C++ code. But while this is good advice (so much that raw loops are becoming a bit of a code smell for me), there are several situations where imperative loops keep being better for me. Explicit recursion is not always more readable and easier to understand than imperative foreach loops. While most functions could and should be strongly pure, I often like mutation and imperative code inside functions. Haskell is a purely functional language, but I prefer a mix, like D, Scala, language"[1]. Bye, bearophile [1] that is as much false as saying that "Haskell is the best imperative language" as some haskellers sometimes say :-)
Sep 19 2013
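bearophile's recursion-versus-loop contrast can be sketched in C++ (the `sum_rec`/`sum_loop` helpers are invented for the example; both compute the same sum, and which one reads better is exactly the judgment call being discussed):

```cpp
#include <cstddef>
#include <vector>

// Explicit recursion: sums v[i..end) by peeling off one element per call.
int sum_rec(const std::vector<int>& v, std::size_t i = 0) {
    return i == v.size() ? 0 : v[i] + sum_rec(v, i + 1);
}

// Imperative loop: the same computation as a plain range-for.
int sum_loop(const std::vector<int>& v) {
    int total = 0;
    for (int x : v) total += x;
    return total;
}
```

Both return 10 for {1, 2, 3, 4}; the recursive version also burns a stack frame per element, which is part of why the loop often wins in practice.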
On Friday, 20 September 2013 at 02:24:31 UTC, bearophile wrote:Szymon Gatner:http://bartoszmilewski.com/2013/09/19/edward-chands/From the blog post:Imperative languages offer no protection against data races — maybe with the exception of D.<What about Ada and Rust?Many people in the C family of languages tend to disregard the Pascal family. Sometimes I dream of a world where Modula-2 would have taken C's place. :) -- Paulo
Sep 20 2013
More from the blog post:It’s a common but false belief that reference counting (using shared pointers in particular) is better than garbage collection. There is actual research showing that the two approaches are just two sides of the same coin. You should realize that deleting a shared pointer may lead to an arbitrary long pause in program execution, with similar performance characteristics as a garbage sweep.<That's an interesting paper, and the symmetry between the two forms of garbage collection is something to remember, but it doesn't contain benchmarks, so you can't draw performance conclusions from that paper. Bye, bearophile
Sep 20 2013
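The "arbitrarily long pause" claim quoted above is easy to picture: dropping the last shared_ptr to a linked structure runs every destructor in the chain at once. A minimal sketch (the `Node` type and destruction counter are invented for illustration; note that destruction recurses through `next`, which is fine for modest lengths but a real implementation would iterate):

```cpp
#include <memory>

// A singly linked list where each node owns the next via shared_ptr.
struct Node {
    std::shared_ptr<Node> next;
    int* counter;                 // counts destructions, for demonstration
    ~Node() { ++*counter; }
};

// Build a list of n nodes; dropping the returned head releases all of them
// in one cascade -- the "pause" the blog post is talking about.
std::shared_ptr<Node> make_list(int n, int* counter) {
    std::shared_ptr<Node> head;
    for (int i = 0; i < n; ++i)
        head = std::shared_ptr<Node>(new Node{head, counter});
    return head;
}
```

Resetting the head pointer of a 100-node list runs 100 destructors back to back, at a point in the program the caller did not explicitly choose -- much like a sweep.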
On Friday, 20 September 2013 at 02:24:31 UTC, bearophile wrote:At Going Native 2013 there was a very good talk that among other things suggests to avoid raw loops in C++ code. But while this is good advice (so much that raw loops are becoming a bit of code smell for me), there are several situations where imperative loops keep being better for me. Explicit recursion is not always more readable and more easy to understand than imperative foreach loops.I don't think his advice is to use recursion instead of loops. I believe the point was that raw loops can usually be replaced by something more generic (e.g. std::find, std::rotate, std::transform, etc.). The loop is just an implementation detail of a more high-level algorithm. Within those algorithms, it's ok to use loops.
Sep 20 2013
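Peter's point can be sketched in C++: the raw loop does not disappear, it just becomes an implementation detail of a named algorithm. The function names here are hypothetical, but std::find_if is the real algorithm being substituted:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Raw loop: find the first negative element by hand.
int first_negative_raw(const std::vector<int>& v) {
    for (std::size_t i = 0; i < v.size(); ++i)
        if (v[i] < 0) return v[i];
    return 0;  // sentinel when no negative element exists
}

// Same intent, but the loop lives inside std::find_if;
// the caller states *what* is wanted, not *how* to iterate.
int first_negative_algo(const std::vector<int>& v) {
    auto it = std::find_if(v.begin(), v.end(), [](int x) { return x < 0; });
    return it != v.end() ? *it : 0;
}
```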
On Friday, 20 September 2013 at 08:59:44 UTC, Peter Alexander wrote:On Friday, 20 September 2013 at 02:24:31 UTC, bearophile wrote:While at the university, I got to read a paper about a HPC computer architecture with NUMA, that used Lisp as their systems programming language. The functional programming approach was the only way to really take proper advantage of the hardware specific architecture. This is why it is better to use algorithms instead of explicit loops, as they can be made to scale. -- PauloAt Going Native 2013 there was a very good talk that among other things suggests to avoid raw loops in C++ code. But while this is good advice (so much that raw loops are becoming a bit of code smell for me), there are several situations where imperative loops keep being better for me. Explicit recursion is not always more readable and more easy to understand than imperative foreach loops.I don't think his advice is to use recursion instead of loops. I believe the point was that raw loops can usually be replaced by something more generic (e.g. std::find, std::rotate, std::transform, etc.). The loop is just an implementation detail of a more high-level algorithm. Within those algorithms, it's ok to use loops.
Sep 20 2013
Peter Alexander:On Friday, 20 September 2013 at 02:24:31 UTC, bearophile wrote:In my comment I have packed too much in too little space. Let me try again: In Haskell you usually don't use explicit recursion. You usually use higher order functions. But once in a while, for performance, to implement those HOFs or for other reasons you use explicit recursion. In that Going Native 2013 talk it was argued that in C++ it's usually better to use STL algorithms instead of raw loops. But sometimes you have to use raw loops, when you implement those algorithms, and in other situations. In D it's often better to use std.algorithm/range instead of raw foreach loops (despite probably unlike C++ in D a heavy use of ranges leads to a lower performance, even if you use the LDC2 compiler). So in Haskell, C++ and D it's better to use higher iteration abstractions instead of raw recursion/iteration, but once in a while you have to use it. From what I've seen so far, in those cases I prefer to use explicit foreach iteration in D instead of explicit recursion in Haskell. Bye, bearophileAt Going Native 2013 there was a very good talk that among other things suggests to avoid raw loops in C++ code. But while this is good advice (so much that raw loops are becoming a bit of code smell for me), there are several situations where imperative loops keep being better for me. Explicit recursion is not always more readable and more easy to understand than imperative foreach loops.I don't think his advice is to use recursion instead of loops. I believe the point was that raw loops can usually be replaced by something more generic (e.g. std::find, std::rotate, std::transform, etc.). The loop is just an implementation detail of a more high-level algorithm. Within those algorithms, it's ok to use loops.
Sep 20 2013
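The flip side -- that raw loops are fine inside the algorithms themselves -- might look like this sketch, where `count_matching` is a hypothetical stand-in for something like std::count_if:

```cpp
#include <vector>

// Inside the algorithm the raw loop is at home: it is written once,
// hidden behind a name, and reused everywhere.
template <typename It, typename Pred>
int count_matching(It first, It last, Pred p) {
    int n = 0;
    for (; first != last; ++first)  // the "raw loop", as an implementation detail
        if (p(*first)) ++n;
    return n;
}
```

Callers then write `count_matching(v.begin(), v.end(), pred)` and never see the loop.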
From what I've seen so far, in those cases I prefer to use explicit foreach iteration in D instead of explicit recursion in Haskell.Two other things to notice: Haskell code seems to use too many different tiny functions and higher order functions, so it's hard for me to remember them all. The Haskell code starts to look like APL code. So using HOFs is useful, but beyond a certain number of them I prefer "raw coding". Another thing to notice is that Haskell user defined operators sometimes hurt my ability to read code. This compounds the preceding problem. Bye, bearophile
Sep 20 2013
On Friday, 20 September 2013 at 09:43:10 UTC, bearophile wrote: [cut]Another thing to notice is that Haskell user defined operators sometimes hurt my ability to read code. This is composed with the precedent problem.++ I remember that a tutorial about Elm felt much easier to read than other texts based on Haskell, because Elm's author took great care in choosing readable operators. That said, he made the same mistake as Haskell's authors: currying is a *mathematical detail* which shouldn't obscure function type: 'f: a->b->c' is less readable than 'f: a,b->c'. renoX
Sep 20 2013
Am 20.09.2013 16:24, schrieb renoX:On Friday, 20 September 2013 at 09:43:10 UTC, bearophile wrote: [cut]That is standard in all languages of the ML family, not only Haskell.Another thing to notice is that Haskell user defined operators sometimes hurt my ability to read code. This is composed with the precedent problem.++ I remember that reading a tutorial about Elm felt much more easy to read than other text based on Haskell because Elm's author took great care of choosing readable operators.. That said, he made the same mistake as Haskell's authors: currying is a *mathematical detail* which shouldn't obscure function type: 'f: a->b->c' is less readable than 'f: a,b->c'. renoX
Sep 20 2013
On Friday, 20 September 2013 at 15:23:20 UTC, Paulo Pinto wrote:Am 20.09.2013 16:24, schrieb renoX:I know, but 'standard' doesn't mean that it is a good idea.. renoXThat said, he made the same mistake as Haskell's authors: currying is a *mathematical detail* which shouldn't obscure function type: 'f: a->b->c' is less readable than 'f: a,b->c'. renoXThat is standard in all languages of the ML family, not only Haskell.
Sep 23 2013
On Monday, 23 September 2013 at 12:08:48 UTC, renoX wrote:On Friday, 20 September 2013 at 15:23:20 UTC, Paulo Pinto wrote:I guess it depends on the reader, I always found it quite natural. But then again, I enjoy ML languages since I learned Caml Light. -- PauloAm 20.09.2013 16:24, schrieb renoX:I know, but 'standard' doesn't mean that it is a good idea.. renoXThat said, he made the same mistake as Haskell's authors: currying is a *mathematical detail* which shouldn't obscure function type: 'f: a->b->c' is less readable than 'f: a,b->c'. renoXThat is standard in all languages of the ML family, not only Haskell.
Sep 23 2013
On 09/23/2013 02:08 PM, renoX wrote:On Friday, 20 September 2013 at 15:23:20 UTC, Paulo Pinto wrote:Neither does 'mathematical detail' mean it is obscure, unreadable or a mistake as you seem to imply.Am 20.09.2013 16:24, schrieb renoX:I know, but 'standard' doesn't mean that it is a good idea.. renoXThat said, he made the same mistake as Haskell's authors: currying is a *mathematical detail* which shouldn't obscure function type: 'f: a->b->c' is less readable than 'f: a,b->c'. renoXThat is standard in all languages of the ML family, not only Haskell.
Sep 24 2013
On Tuesday, 24 September 2013 at 09:15:37 UTC, Timon Gehr wrote:On 09/23/2013 02:08 PM, renoX wrote:I'm not sure you understood my point: a 'normal' function takes inputS and produces an output: in the notation a,b->c you can clearly see the inputS and the output with a minimum of 'syntax noise' around them. In the notation a -> b -> c, the 'syntax noise' is bigger (those arrows between the input parameters are much more 'heavy on the eye' than a comma), and what does it bring? Nothing. It's the notation which makes the function type less readable, which I consider a mistake. renoXOn Friday, 20 September 2013 at 15:23:20 UTC, Paulo Pinto wrote:Neither does 'mathematical detail' mean it is obscure, unreadable or a mistake as you seem to imply.Am 20.09.2013 16:24, schrieb renoX:I know, but 'standard' doesn't mean that it is a good idea.. renoXThat said, he made the same mistake as Haskell's authors: currying is a *mathematical detail* which shouldn't obscure function type: 'f: a->b->c' is less readable than 'f: a,b->c'. renoXThat is standard in all languages of the ML family, not only Haskell.
Sep 24 2013
On Tuesday, 24 September 2013 at 11:32:18 UTC, renoX wrote:On Tuesday, 24 September 2013 at 09:15:37 UTC, Timon Gehr wrote:On 09/23/2013 02:08 PM, renoX wrote:I'm not sure you understood my point: a 'normal' function takes inputS and produce an output, in the notation: a,b->c you can clearly see the inputS and the output with a minimum of 'syntax noise' around them. In the notation a -> b -> c, the 'syntax noise' is bigger (those arrows between the input parameters are much more 'heavy on the eye' than a quote), and what does it bring? Nothing.. It's the notation which makes the function type less readable which I consider a mistake. renoXOn Friday, 20 September 2013 at 15:23:20 UTC, Paulo Pinto wrote:Neither does 'mathematical detail' mean it is obscure, unreadable or a mistake as you seem to imply.Am 20.09.2013 16:24, schrieb renoX:I know, but 'standard' doesn't mean that it is a good idea.. renoXThat said, he made the same mistake as Haskell's authors: currying is a *mathematical detail* which shouldn't obscure function type: 'f: a->b->c' is less readable than 'f: a,b->c'. renoXThat is standard in all languages of the ML family, not only Haskell.A 'normal' function in Haskell takes exactly one object and returns exactly one object. a -> b -> c is actually a -> (b -> c) because -> is right-associative. It's perfectly readable for people in the Haskell subculture. You'll have a hard time convincing them otherwise :)
Sep 24 2013
On Tuesday, 24 September 2013 at 12:06:22 UTC, Max Samukha wrote:A 'normal' function in Haskell takes exactly one object and returns exactly one object. a -> b -> c is actually a -> (b -> c) because -> is right-associative. It's perfectly readable for people in the Haskell subculture. You'll have hard time convincing them otherwise :)Isn't function application in Haskell left-associative? (I might be confusing terms as I am just learning it)
Sep 24 2013
On Tuesday, 24 September 2013 at 12:09:28 UTC, Szymon Gatner wrote:On Tuesday, 24 September 2013 at 12:06:22 UTC, Max Samukha wrote:So am I. Function application is left-associative. (f a b) is the equivalent of ((f a) b). Arrow in the lambda types is right-associative. If you mentally add parens, then things like (a -> b -> c) -> [a] -> [b] -> [c] look less intimidating - (a -> (b -> c)) -> ([a] -> ([b] -> [c])). Well, they don't. But at least you can make some sense of them, sometimes. They definitely don't look more intimidating than D's template instantiations.A 'normal' function in Haskell takes exactly one object and returns exactly one object. a -> b -> c is actually a -> (b -> c) because -> is right-associative. It's perfectly readable for people in the Haskell subculture. You'll have hard time convincing them otherwise :)Isn't function application in Haskell left-associative? (I might be confusing terms as I am just learning it)
Sep 24 2013
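The right-associative reading discussed above can be mimicked in C++ with a function returning a lambda (`add` is a hypothetical example; partial application is exactly what the a -> (b -> c) shape buys):

```cpp
#include <functional>

// Curried addition: add takes one int and returns an int -> int function,
// mirroring the Haskell type  add :: Int -> (Int -> Int).
std::function<int(int)> add(int a) {
    return [a](int b) { return a + b; };
}
```

`add(2)(3)` applies left-associatively, `(add 2) 3` in Haskell terms, while `add(1)` on its own is a reusable increment function.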
On Tuesday, 24 September 2013 at 11:32:18 UTC, renoX wrote:I'm not sure you understood my point: a 'normal' function takes inputS and produce an output, in the notation: a,b->c you can clearly see the inputS and the output with a minimum of 'syntax noise' around them. In the notation a -> b -> c, the 'syntax noise' is bigger (those arrows between the input parameters are much more 'heavy on the eye' than a quote), and what does it bring? Nothing.. It's the notation which makes the function type less readable which I consider a mistake.You are putting an artificial barrier here. a -> b -> c is a function that takes a as a parameter and returns a function that takes b as a parameter and returns c. The concept of multiple parameters and stuff like that exists only in your mind. You try to map a concept you have in your mind onto the language when it does NOT exist in the language.
Sep 24 2013
On Tuesday, 24 September 2013 at 13:04:10 UTC, deadalnix wrote:On Tuesday, 24 September 2013 at 11:32:18 UTC, renoX wrote:A language is not something set in stone! If the design of a language requires unnecessary visual noise, I dislike it; this is not as bad as Lisp, but it is still suboptimal. This is not the only 'visual noise' in Haskell: for example Idris replaced '::' by ':', a good change IMHO. renoXI'm not sure you understood my point: a 'normal' function takes inputS and produce an output, in the notation: a,b->c you can clearly see the inputS and the output with a minimum of 'syntax noise' around them. In the notation a -> b -> c, the 'syntax noise' is bigger (those arrows between the input parameters are much more 'heavy on the eye' than a quote), and what does it bring? Nothing.. It's the notation which makes the function type less readable which I consider a mistake.You are putting artificial barrier here. a -> b -> c is a function that take a as parameter and return a function that take b as parameter and return c. The concept of multiple parameters and stuff like that exists only in your mind. You try to map a concept you have in your mind in the language when it DO NOT exist in the language.
Sep 24 2013
On Tuesday, 24 September 2013 at 13:46:16 UTC, renoX wrote:On Tuesday, 24 September 2013 at 13:04:10 UTC, deadalnix wrote:I think that -> is neither unnecessary nor noise. After having played with Haskell for a while, I actually find the syntax of D unnecessarily redundant.On Tuesday, 24 September 2013 at 11:32:18 UTC, renoX wrote:A language is not something set in stone! If the design of a language requires unecessary visual noise, I dislike it, this is not as bad as Lisp, but it is still suboptimal.I'm not sure you understood my point: a 'normal' function takes inputS and produce an output, in the notation: a,b->c you can clearly see the inputS and the output with a minimum of 'syntax noise' around them. In the notation a -> b -> c, the 'syntax noise' is bigger (those arrows between the input parameters are much more 'heavy on the eye' than a quote), and what does it bring? Nothing.. It's the notation which makes the function type less readable which I consider a mistake.You are putting artificial barrier here. a -> b -> c is a function that take a as parameter and return a function that take b as parameter and return c. The concept of multiple parameters and stuff like that exists only in your mind. You try to map a concept you have in your mind in the language when it DO NOT exist in the language.This is not the only 'visual noise' in Haskell: for example Idris replaced '::' by ':', a good change IMHO.That is probably because ':' is already the list cons operator.renoX
Sep 24 2013
On Tuesday, 24 September 2013 at 14:15:12 UTC, Max Samukha wrote: [cut]I think that -> is neither unnecessary nor noise. After having played with Haskell for a while, I actually find the syntax of D unnecessarily redundant.Oh, D is hardly a good example for syntax! Better than C++ doesn't say much. That said, I don't see how one could prefer 'a -> b -> c' over 'a,b -> c' in this case.Yes, it's the other way round in Idris, but for once I prefer D's operator '~': list appending is common enough that it deserves a proper operator, not a doubled one like '::' in Idris or others (Scala?). renoXThis is not the only 'visual noise' in Haskell: for example Idris replaced '::' by ':', a good change IMHO.That is probably because ':' is the list append operator already.
Sep 24 2013
On Tuesday, 24 September 2013 at 14:24:48 UTC, renoX wrote:On Tuesday, 24 September 2013 at 14:15:12 UTC, Max Samukha wrote: [cut]Ok.I think that -> is neither unnecessary nor noise. After having played with Haskell for a while, I actually find the syntax of D unnecessarily redundant.Oh, D is hardly a good example for syntax! Better than C++ doesn't say much..In the case of Haskell, it is not a matter of preference. The syntax follows from the language semantics and I don't see how it can be different and better at the same time. (a, b) -> c means a function that maps a pair to an object (the same as ((,) a b) -> c). a -> b -> c, parens around b -> c omitted thanks to right associativity, means a function that maps an object to a function. How does a, b -> c fit in this picture?Sadly, I don't know Idris or Scala. Now we are talking about subjective preferences, which is an exercise in futility. :)Yes, it's the other way round in Idris, but for once I prefer D's operator '~': list appending is common enough that it deserves a proper operator not a doubled one :: like in Idris or other (Scala?).This is not the only 'visual noise' in Haskell: for example Idris replaced '::' by ':', a good change IMHO.That is probably because ':' is the list append operator already.renoX
Sep 24 2013
On 9/20/13 2:36 AM, bearophile wrote:In D it's often better to use std.algorithm/range instead of raw foreach loops (despite probably unlike C++ in D a heavy use of ranges leads to a lower performance, even if you use the LDC2 compiler).Why would there be a performance loss? Andrei
Sep 20 2013
On 20/09/13 17:28, Andrei Alexandrescu wrote:Why would there be a performance loss?Depends on the particular case, but my experience is that _in practice_ stuff based around range interfaces can often be slower than raw iteration. I don't think anyone's saying a performance loss is inevitable or unavoidable, but there currently is one.
Sep 20 2013
On 9/20/2013 8:54 AM, Joseph Rushton Wakeling wrote:On 20/09/13 17:28, Andrei Alexandrescu wrote:I know that, at least with dmd's back end, it's because the optimizer was built around the kinds of things that C++ programmers tend to write. The D range/algorithm generates unusual code (for the back end) that the back end doesn't optimize for. For example, ranges tend to lump several variables together and call them a 'struct'. The back end is not tuned to deal with structs as an aggregate of discrete 'variables', meaning that such variables don't get assigned to registers. Structs are treated as a lump. This is not a fundamental performance problem with D. The back end needs to be improved to "dis-aggregate" structs back into discrete variables.Why would there be a performance loss?Depends on the particular case, but my experience is that _in practice_ stuff based around range interfaces can often be slower than raw iteration. I don't think anyone's saying a performance loss is inevitable or unavoidable, but there currently is one.
Sep 20 2013
On 20/09/13 18:01, Walter Bright wrote:I know that, at least with dmd's back end, it's because the optimizer was built around the kinds of things that C++ programmers tend to write. The D range/algorithm generates unusual code (for the back end) that the back end doesn't optimize for. For example, ranges tend to lump several variables together and call them a 'struct'. The back end is not tuned to deal with structs as an aggregate of discreet 'variables', meaning that such variables don't get assigned to registers. Structs are treated as a lump. This is not a fundamental performance problem with D. The back end needs to be improved to "dis-aggregate" structs back into discreet variables.I wouldn't single out DMD for criticism -- I don't know to what extent the underlying reasons overlap, but all the compilers cope less well with range-based material than they might in theory. The canonical example would be something like, foreach (i; iota(10)) { ... } which in theory shouldn't be any slower than, foreach (i; 0 .. 10) { ... } but in practice is, no matter what the compiler.
Sep 20 2013
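For comparison, the same shape in C++: a hypothetical `Iota` struct mimicking D's empty/front/popFront range protocol, which in theory should optimize down to the plain counted loop it abstracts:

```cpp
// A minimal iota-style range in the D protocol's image. The whole state
// is a small struct -- exactly the kind of thing the posts above say
// back ends must learn to break apart into registers.
struct Iota {
    int cur, stop;
    bool empty() const { return cur >= stop; }
    int front() const { return cur; }
    void popFront() { ++cur; }
};

// foreach (i; iota(n)) analogue:
int sum_iota(int n) {
    int total = 0;
    for (Iota r{0, n}; !r.empty(); r.popFront())
        total += r.front();
    return total;
}

// foreach (i; 0 .. n) analogue:
int sum_raw(int n) {
    int total = 0;
    for (int i = 0; i < n; ++i)
        total += i;
    return total;
}
```

The two functions are semantically identical; the thread's complaint is that compilers do not always make them identical in generated code.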
On 9/20/13 9:28 AM, Joseph Rushton Wakeling wrote:On 20/09/13 18:01, Walter Bright wrote:I think I know how to fix that. I hypothesize it's about using actual increment instead of a stored value "step" for the particular case when step is 1. AndreiI know that, at least with dmd's back end, it's because the optimizer was built around the kinds of things that C++ programmers tend to write. The D range/algorithm generates unusual code (for the back end) that the back end doesn't optimize for. For example, ranges tend to lump several variables together and call them a 'struct'. The back end is not tuned to deal with structs as an aggregate of discreet 'variables', meaning that such variables don't get assigned to registers. Structs are treated as a lump. This is not a fundamental performance problem with D. The back end needs to be improved to "dis-aggregate" structs back into discreet variables.I wouldn't single out DMD for criticism -- I don't know to what extent the underlying reasons overlap, but all the compilers cope less well with range-based material than they might in theory. The canonical example would be something like, foreach (i; iota(10)) { ... } which in theory shouldn't be any slower than, foreach (i; 0 .. 10) { ... } but in practice is, no matter what the compiler.
Sep 20 2013
On 20/09/13 19:41, Andrei Alexandrescu wrote:On 9/20/13 9:28 AM, Joseph Rushton Wakeling wrote:Excellent, that'll be great to see :-)The canonical example would be something like, foreach (i; iota(10)) { ... } which in theory shouldn't be any slower than, foreach (i; 0 .. 10) { ... } but in practice is, no matter what the compiler.I think I know how to fix that. I hypothesize it's about using actual increment instead of a stored value "step" for the particular case when step is 1.
Sep 20 2013
On 9/20/13 11:21 AM, Joseph Rushton Wakeling wrote:On 20/09/13 19:41, Andrei Alexandrescu wrote:http://d.puremagic.com/issues/show_bug.cgi?id=11077 AndreiOn 9/20/13 9:28 AM, Joseph Rushton Wakeling wrote:Excellent, that'll be great to see :-)The canonical example would be something like, foreach (i; iota(10)) { ... } which in theory shouldn't be any slower than, foreach (i; 0 .. 10) { ... } but in practice is, no matter what the compiler.I think I know how to fix that. I hypothesize it's about using actual increment instead of a stored value "step" for the particular case when step is 1.
Sep 20 2013
On Friday, 20 September 2013 at 16:28:49 UTC, Joseph Rushton Wakeling wrote:I wouldn't single out DMD for criticism -- I don't know to what extent the underlying reasons overlap, but all the compilers cope less well with range-based material than they might in theory. The canonical example would be something like, foreach (i; iota(10)) { ... } which in theory shouldn't be any slower than, foreach (i; 0 .. 10) { ... } but in practice is, no matter what the compiler.Chandler gave a talk about missed optimization opportunities in LLVM some time ago, visible here http://llvm.org/devmtg/2013-04/videos/carruth-hires.mov You may also want to look at this one for some basics first: http://www.youtube.com/watch?v=eR34r7HOU14 What happens with ranges is that they do not fit that model properly. Range code often looks like a tree (front calling subrange.front, popFront calling subrange.popFront, etc . . .). At each inlining, front/popFront and friends become bigger and bigger, to the point where they aren't good inlining candidates. The compiler does not see that everything cancels out at the end. The other big performance pitfall in LLVM with ranges is bugs in the optimizer when it comes to aggregate stores/loads. I reported some of these, and the LLVM team seems willing to get rid of them because a lot of frontends generate them (notably the gcc frontend). clang does not use them, so they got less attention in the first place.
Sep 20 2013
On Friday, 20 September 2013 at 16:28:49 UTC, Joseph Rushton Wakeling wrote: [...]The canonical example would be something like, foreach (i; iota(10)) { ... } which in theory shouldn't be any slower than, foreach (i; 0 .. 10) { ... } but in practice is, no matter what the compiler.Some compilers are unexpectedly smart in this case. I just benchmarked somewhat similar code with two versions of the same function: one using a straight for loop, the other a mixture of iota and "..". A plain for loop should be the fastest, right? Almost right. Here are the functions:

int e28_0(int N = 1002) {
    int diagNumber = 1;
    int sum = diagNumber;
    for (int width = 2; width < N; width += 2)
        for (int j = 0; j < 4; ++j) {
            diagNumber += width;
            sum += diagNumber;
        }
    return sum;
}

int e28_1(int N = 1002) {
    int diagNumber = 1;
    int sum = diagNumber;
    foreach (width; iota(2, N, 2))
        foreach (_; 0..4) {
            diagNumber += width;
            sum += diagNumber;
        }
    return sum;
}

Here are the results:

GDC 4.8.1: gdc-4.8 -m64 -march=native -fno-bounds-check -frename-registers -frelease -O3
669171001 830ns e28_0
669171001 830ns e28_1

DMD64 2.063.2: dmd -O -noboundscheck -inline -release
669171001 1115ns e28_0
669171001 1958ns e28_1

LDC 0.11.0: ldmd2 -m64 -O -noboundscheck -inline -release
669171001 454ns e28_0
669171001 395ns e28_1
Sep 21 2013
On 9/20/13 8:54 AM, Joseph Rushton Wakeling wrote:On 20/09/13 17:28, Andrei Alexandrescu wrote:With dmd that's indeed the case. But LLVM is good at enregistering small structs, which is how ranges are implemented. AndreiWhy would there be a performance loss?Depends on the particular case, but my experience is that _in practice_ stuff based around range interfaces can often be slower than raw iteration. I don't think anyone's saying a performance loss is inevitable or unavoidable, but there currently is one.
Sep 20 2013
On 20.09.2013 17:28, Andrei Alexandrescu wrote:On 9/20/13 2:36 AM, bearophile wrote:Because the range concept underutilizes current CPUs. I don't know if all empty/front/popFront are inlined. I suppose some templated functions are, but surely not all of them. What will be faster, calling all of those functions for each element or traversing memory using a loop? Of course the 2nd option will be faster, because the act of traversing (cached) memory does not have the overhead of a function call. Some time ago I proposed a hybrid range concept where the front() is an array. All algorithms would first traverse the memory in this array, then call popFront to get another array (slice). In this way, function call overhead is minimized because it's paid sparsely rather than for each range element.In D it's often better to use std.algorithm/range instead of raw foreach loops (despite probably unlike C++ in D a heavy use of ranges leads to a lower performance, even if you use the LDC2 compiler).Why would there be a performance loss?
Sep 20 2013
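Piotr's hybrid-range idea might be sketched like this in C++ (`sum_in_chunks` is invented for illustration; the inner loop plays the role of traversing the array handed out by front(), and the advance to the next slice plays the role of popFront()):

```cpp
#include <cstddef>
#include <vector>

// Hybrid-range idea: hand out a block of elements at a time, so any
// per-step call overhead is paid once per chunk, not once per element.
// chunk must be > 0.
int sum_in_chunks(const std::vector<int>& v, std::size_t chunk) {
    int total = 0;
    std::size_t i = 0;
    while (i < v.size()) {
        std::size_t end = i + chunk < v.size() ? i + chunk : v.size();
        // "front()" is the slice v[i .. end): traverse it with a tight loop.
        for (std::size_t j = i; j < end; ++j)
            total += v[j];
        i = end;  // "popFront()": advance to the next slice
    }
    return total;
}
```

For a chunk size of 1 this degenerates to per-element stepping; larger chunks amortize the stepping cost, which is the whole point of the proposal.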
Piotr Szturmaj:Some time ago I proposed a hybrid range concept where the front() is an array. All algorithms would first traverse the memory in this array, then call popFront to get another array (slice). In this way, function call overhead is minimized because it's paid sparsely rather than for each range element.There are papers on this topic, like: http://lafstern.org/matt/segmented.pdf Bye, bearophile
Sep 20 2013
On 9/20/13 9:51 AM, Piotr Szturmaj wrote:On 20.09.2013 17:28, Andrei Alexandrescu wrote:This is just a supposition. Inlining is automatic and just works subject to the constraints chosen by the implementers. It's not like there's a guy sitting there and getting bored of inlining.On 9/20/13 2:36 AM, bearophile wrote:Because range concept underutilize current CPUs. I don't know if all empty/front/popFront are inlined. I suppose some templated functions are, but surely not all of them. What will be faster, calling all of those functions for each element or traversing memory using a loop?In D it's often better to use std.algorithm/range instead of raw foreach loops (despite probably unlike C++ in D a heavy use of ranges leads to a lower performance, even if you use the LDC2 compiler).Why would there be a performance loss?Of course 2nd option will be faster, because the act of traversing (cached) memory does not have the overhead of a function call.No, no.Some time ago I proposed a hybrid range concept where the front() is an array. All algorithms would first traverse the memory in this array, then call popFront to get another array (slice). In this way, function call overhead is minimized because it's paid sparsely rather than for each range element.That's a good idea but for completely different reasons. Andrei
Sep 20 2013
On 20.09.2013 19:45, Andrei Alexandrescu wrote:On 9/20/13 9:51 AM, Piotr Szturmaj wrote:This was not my point. I know that inlining is automatic, but certainly not all functions are automatically inlined and all of those non-inlined carry some overhead.On 20.09.2013 17:28, Andrei Alexandrescu wrote:This is just a supposition. Inlining is automatic and just works subject to the constraints chosen by the implementers. It's not like there's a guy sitting there and getting bored of inlining.On 9/20/13 2:36 AM, bearophile wrote:Because range concept underutilize current CPUs. I don't know if all empty/front/popFront are inlined. I suppose some templated functions are, but surely not all of them. What will be faster, calling all of those functions for each element or traversing memory using a loop?In D it's often better to use std.algorithm/range instead of raw foreach loops (despite probably unlike C++ in D a heavy use of ranges leads to a lower performance, even if you use the LDC2 compiler).Why would there be a performance loss?Could you elaborate? :)Of course 2nd option will be faster, because the act of traversing (cached) memory does not have the overhead of a function call.No, no.Some time ago I proposed a hybrid range concept where the front() is an array. All algorithms would first traverse the memory in this array, then call popFront to get another array (slice). In this way, function call overhead is minimized because it's paid sparsely rather than for each range element.That's a good idea but for completely different reasons.
Sep 20 2013
On 9/20/2013 10:45 AM, Andrei Alexandrescu wrote:It's not like there's a guy sitting there and getting bored of inlining.The nice thing about modern optimizers is they are indefatigable. They don't get bored, go on strike, or come to work high. :-)
Sep 20 2013
On Friday, 20 September 2013 at 08:59:44 UTC, Peter Alexander wrote:On Friday, 20 September 2013 at 02:24:31 UTC, bearophile wrote:If only std algorithms took containers (by that I mean things that container for accepts too) as arguments (and not iterators)... even in the form of new functions like foreach_all, transform_all etc.At Going Native 2013 there was a very good talk that among other things suggests to avoid raw loops in C++ code. But while this is good advice (so much that raw loops are becoming a bit of code smell for me), there are several situations where imperative loops keep being better for me. Explicit recursion is not always more readable and more easy to understand than imperative foreach loops.I don't think his advice is to use recursion instead of loops. I believe the point was that raw loops can usually be replaced by something more generic (e.g. std::find, std::rotate, std::transform, etc.). The loop is just an implementation detail of a more high-level algorithm. Within those algorithms, it's ok to use loops.
Sep 20 2013
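Peter's point in a minimal C++ illustration (the countEvens* names are hypothetical): the raw loop and the named algorithm compute the same thing, but the algorithm states *what* is computed while the loop states *how*.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Raw loop: the reader must infer the intent from the mechanics.
int countEvensLoop(const std::vector<int>& v) {
    int n = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        if (v[i] % 2 == 0) ++n;
    return n;
}

// Named algorithm: the intent is in the name; the loop is an
// implementation detail inside std::count_if.
int countEvensAlgo(const std::vector<int>& v) {
    return static_cast<int>(
        std::count_if(v.begin(), v.end(), [](int x) { return x % 2 == 0; }));
}
```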
On 2013-09-20 11:37, Szymon Gatner wrote:If only std algorithms took containers (by that I mean things that container for accepts too) as arguments (and not iterators)... even in the form of new functions like foreach_all, transform_all etc.Can't a container be a range as well? -- /Jacob Carlborg
Sep 20 2013
On Friday, 20 September 2013 at 10:00:32 UTC, Jacob Carlborg wrote:On 2013-09-20 11:37, Szymon Gatner wrote:Not sure what you mean but C++ has no concept of a range. Apparently there is now a study group but atm standard algos are crippled compared to "container for".If only std algorithms took containers (by that I mean things that container for accepts too) as arguments (and not iterators)... even in the form of new functions like foreach_all, transform_all etc.Can't a container be a range as well?
Sep 20 2013
On Friday, 20 September 2013 at 10:02:58 UTC, Szymon Gatner wrote:On Friday, 20 September 2013 at 10:00:32 UTC, Jacob Carlborg wrote:To clarify what I mean by crippled: standard algorithms were supposed to make code more expressive, and one of the ways they would do it was to eliminate tedious iterator-based for loops: instead of

    for (typename Container<T>::const_iterator it = c.begin(); it != c.end(); ++it) {..}

you could do:

    std::for_each(c.begin(), c.end(), [](T&) {...});

(of course this all only makes any sense with lambdas, without them it is really hard to argue about expressiveness in the first place) but now we can do this:

    for (auto& v : c) {}

which is even nicer than the for_each version. Algorithms do more than iterating of course, but it is a shame they didn't evolve as basic constructs.On 2013-09-20 11:37, Szymon Gatner wrote:Not sure what you mean but C++ has no concept of a range. Apparently there is now a study group but atm standard algos are crippled compared to "container for".If only std algorithms took containers (by that I mean things that container for accepts too) as arguments (and not iterators)... even in the form of new functions like foreach_all, transform_all etc.Can't a container be a range as well?
Sep 20 2013
On 2013-09-20 12:02, Szymon Gatner wrote:Not sure what you mean but C++ has no concept of a range. Apparent there is now a study group but atm standard algos are crippled compared to "container for".Oh, I was thinking about D. Sorry for the confusion. -- /Jacob Carlborg
Sep 20 2013
20-Sep-2013 14:00, Jacob Carlborg wrote:On 2013-09-20 11:37, Szymon Gatner wrote:For Christ sake no, no and no. For starters range skips/drops elements when iterating, and thusly iteration has its own state that is useless for a container otherwise. The idea of "contange" spreads like a virus, no matter how abominable the end result is. The fact that slices sometimes look like containers (BTW ill-suited for anything beyond primitives/PODish stuff) must be the cornerstone of this belief. Strengthened by the motto of trying to make user-defined types mimic the behavior of built-ins, it leads to a school of thought that it's fine to blend ownership and access. TL;DR: Suboptimal, unnatural and error-prone are keywords. -- Dmitry OlshanskyIf only std algorithms took containers (by that I mean things that container for accepts too) as arguments (and not iterators)... even in the form of new functions like foreach_all, transform_all etc.Can't a container be a range as well?
Sep 20 2013
On Friday, 20 September 2013 at 10:47:52 UTC, Dmitry Olshansky wrote:20-Sep-2013 14:00, Jacob Carlborg пишет:That would be a filtering range which adds additional logic that costs exactly the same as an if() statement inside a for loop when filtering on condition manually.On 2013-09-20 11:37, Szymon Gatner wrote:For Christ sake no, no and no. For starters range skips/drops elements when iterating, and thusly iteration has its own state that is useless for a container otherwise.If only std algorithms took containers (by that I mean things that container for accepts too) as arguments (and not iterators)... even in the form of new functions like foreach_all, transform_all etc.Can't a container be a range as well?TL;DR: Suboptimal, unnatural and error prone are keywords.Why would it be suboptimal?
Sep 20 2013
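Szymon's cost argument, sketched in C++ (the Filtered type is hypothetical, not a standard one): a filtering adapter whose popFront() does exactly the work of an if inside a hand-written loop, so the adapter adds no work beyond the manual version.

```cpp
#include <vector>

// Minimal filtering range over a vector<int>: skip() advances past
// elements failing the predicate -- the same test an `if` inside a
// raw for loop would perform per element.
template <typename Pred>
struct Filtered {
    const int* cur;
    const int* end;
    Pred pred;

    Filtered(const std::vector<int>& v, Pred p)
        : cur(v.data()), end(v.data() + v.size()), pred(p) { skip(); }

    bool empty() const { return cur == end; }
    int front() const { return *cur; }
    void popFront() { ++cur; skip(); }

private:
    void skip() { while (cur != end && !pred(*cur)) ++cur; }
};
```

Summing the odd elements through the adapter and through a manual if-in-loop performs the same comparisons and yields the same result.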
20-Sep-2013 14:55, Szymon Gatner wrote:On Friday, 20 September 2013 at 10:47:52 UTC, Dmitry Olshansky wrote:Filtering is just an adapter. Better examples are traversing e.g. a binary tree depth-first in-order, or post-order, and that would require state. Again it's easy to miss by looking at built-in arrays.20-Sep-2013 14:00, Jacob Carlborg wrote:That would be a filtering range which adds additional logic that costs exactly the same as an if() statement inside a for loop when filtering on condition manually.On 2013-09-20 11:37, Szymon Gatner wrote:For Christ sake no, no and no. For starters range skips/drops elements when iterating, and thusly iteration has its own state that is useless for a container otherwise.If only std algorithms took containers (by that I mean things that container for accepts too) as arguments (and not iterators)... even in the form of new functions like foreach_all, transform_all etc.Can't a container be a range as well?If said tree needs to be a range I would be horrified to see how it manages to be at the same time a range that (for the sake of example) traverses said tree depth-first in-order. Not even talking about trees having no one "natural" range over them. -- Dmitry Olshansky
Sep 20 2013
On Fri, Sep 20, 2013 at 05:36:59PM +0400, Dmitry Olshansky wrote:20-Sep-2013 14:55, Szymon Gatner пишет:A container should not be confused with a range. That way leads to dragons. :-P (It's rather unfortunate that built-in arrays conflate the two, it leads to a lot of wrong code that works only with arrays but not with "real" ranges.)On Friday, 20 September 2013 at 10:47:52 UTC, Dmitry Olshansky wrote:20-Sep-2013 14:00, Jacob Carlborg пишет:On 2013-09-20 11:37, Szymon Gatner wrote:If only std algorithms took containers (by that I mean things that container for accepts too) as arguments (and not iterators)... even in the form of new functions like foreach_all, transform_all etc.Can't a container be a range as well?Traversing the tree at all requires state, whether or not you do it with ranges. You either put the state on the runtime stack (recursion), or you put the state on the heap in the range adaptor. Same thing.Filtering is just an adapter. Better examples are traversing e.g. binary-tree depth-first in-order, or post-order and that would require state. Again it's easy to miss by looking at built-in arrays.For Christ sake no, no and no. For starters range skips/drops elements when iterating, and thusly iteration has its own state that is useless for a container otherwise.That would be a filtering range which adds additional logic that costs exactly the same as an if() statement inside a for loop when filtering on condition manually.[...] Some trees do, like document trees. But yeah, in general, you don't have a natural ordering to trees. But then again, the whole point of ranges is *linear* traversal. If you're not doing that, then it's probably the wrong abstraction to use. (FWIW, while SortedRange is a neat hack, I haven't actually found it that useful in practice. Most of the ranges I actually use are for the cases where it's an input/forward range and you don't want to store the whole thing, so random access is impractical. 
The cases where I actually have random access usually turn out to be plain ole arrays anyway, so the SortedRange abstraction isn't really necessary.) So for trees, I'd argue that you need a tree-specific iteration abstraction, not ranges, which are linear. It's a clear case of structure conflict. ;-) T -- Real Programmers use "cat > a.out".If said tree needs to be a range I would be horrified to see how it manges to be at the same time a range that (for the sake of example) traverses said tree depth-first in-order. Not even talking about trees having no one "natural" range over them.TL;DR: Suboptimal, unnatural and error prone are keywords.Why would it be suboptimal?
Sep 20 2013
On Friday, 20 September 2013 at 14:49:21 UTC, H. S. Teoh wrote: snip[...] Some trees do, like document trees. But yeah, in general, you don't have a natural ordering to trees. But then again, the whole point of ranges is *linear* traversal. If you're not doing that, then it's probably the wrong abstraction to use. (FWIW, while SortedRange is a neat hack, I haven't actually found it that useful in practice. Most of the ranges I actually use are for the cases where it's an input/forward range and you don't want to store the whole thing, so random access is impractical. The cases where I actually have random access usually turn out to be plain ole arrays anyway, so the SortedRange abstraction isn't really necessary.) So for trees, I'd argue that you need a tree-specific iteration abstraction, not ranges, which are linear. It's a clear case of structure conflict. ;-) TExcuse my lack of knowledge on Ranges, so maybe what I am proposing is infeasible, but for a binary tree for example couldn't you have an InOrderRange, PreOrderRange, and PostOrderRange, or alternatively a DepthFirstRange and BreadthFirstRange? From a consumer's view those would be linear operations? Craig
Sep 20 2013
On Friday, 20 September 2013 at 14:55:41 UTC, Craig Dillabaugh wrote: clipExcuse my lack of knowledge on Ranges, so maybe what I am proposing is infeasible, for a binary tree for example couldn't you have a InOrderRange, PreOrderRange, and PostOrderRange or alternately DepthFirstRange, and BreadFirstRange. From a consumers view those would be linear operations? CraigPlease also excuse my lack of proper punctuation :o)
Sep 20 2013
On Fri, Sep 20, 2013 at 04:55:40PM +0200, Craig Dillabaugh wrote:On Friday, 20 September 2013 at 14:49:21 UTC, H. S. Teoh wrote: snip[...] Well, of course you can have these ranges. But they have to be separate from the container. And they won't be able to take advantage of the tree structure, for example, prune the children of the current node from the iteration. To do that with a range you'd have to skip over all the descendent nodes manually, which is tedious and also a waste of time. Incidentally, I did recently write a PreorderRange API for my own code, which provides an additional method called pruneFront() that efficiently skips over descendent nodes. In retrospect, though, I'm questioning the value of doing this, since that makes assumptions about the range that require too much knowledge about the underlying container, which makes the abstraction nothing more than mere formality. I might as well come out and say "this is a tree" and work with a full-fledged tree abstraction rather than try to shoehorn it into a linear range API. T -- Why is it that all of the instruments seeking intelligent life in the universe are pointed away from Earth? -- Michael Beibl[...] Some trees do, like document trees. But yeah, in general, you don't have a natural ordering to trees. But then again, the whole point of ranges is *linear* traversal. If you're not doing that, then it's probably the wrong abstraction to use. (FWIW, while SortedRange is a neat hack, I haven't actually found it that useful in practice. Most of the ranges I actually use are for the cases where it's an input/forward range and you don't want to store the whole thing, so random access is impractical. The cases where I actually have random access usually turn out to be plain ole arrays anyway, so the SortedRange abstraction isn't really necessary.) So for trees, I'd argue that you need a tree-specific iteration abstraction, not ranges, which are linear. It's a clear case of structure conflict. 
;-) TExcuse my lack of knowledge on Ranges, so maybe what I am proposing is infeasible, for a binary tree for example couldn't you have a InOrderRange, PreOrderRange, and PostOrderRange or alternately DepthFirstRange, and BreadFirstRange. From a consumers view those would be linear operations?
Sep 20 2013
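A C++ sketch of the PreorderRange idea from the post above. The post names a pruneFront() helper but not its signature, so the shape below is an assumption: pre-order traversal with an explicit stack, where pruning simply refrains from pushing the current node's children, skipping the whole subtree in O(1).

```cpp
#include <vector>

struct Node {
    int value;
    std::vector<Node> children;
};

// Pre-order "range" over a tree. The explicit stack holds the nodes
// still to be visited; front() is the node on top.
struct PreorderRange {
    std::vector<const Node*> stack;

    explicit PreorderRange(const Node& root) { stack.push_back(&root); }

    bool empty() const { return stack.empty(); }
    const Node& front() const { return *stack.back(); }

    void popFront() {
        const Node* n = stack.back();
        stack.pop_back();
        // Push children in reverse so the leftmost is visited first.
        for (auto it = n->children.rbegin(); it != n->children.rend(); ++it)
            stack.push_back(&*it);
    }

    // Assumed shape of pruneFront(): advance past the current node
    // *without* descending into it -- its children are never pushed.
    void pruneFront() { stack.pop_back(); }
};
```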
On 9/20/13 7:55 AM, Craig Dillabaugh wrote:Excuse my lack of knowledge on Ranges, so maybe what I am proposing is infeasible, for a binary tree for example couldn't you have a InOrderRange, PreOrderRange, and PostOrderRange or alternately DepthFirstRange, and BreadFirstRange. From a consumers view those would be linear operations?The tree object would have functions returning by value each of these range types. Andrei
Sep 20 2013
20-Sep-2013 18:48, H. S. Teoh пишет:On Fri, Sep 20, 2013 at 05:36:59PM +0400, Dmitry Olshansky wrote:20-Sep-2013 14:55, Szymon Gatner пишет:A container should not be confused with a range. That way leads to dragons. :-P (It's rather unfortunate that built-in arrays conflate the two, it leads to a lot of wrong code that works only with arrays but not with "real" ranges.)On Friday, 20 September 2013 at 10:47:52 UTC, Dmitry Olshansky wrote:20-Sep-2013 14:00, Jacob Carlborg пишет:On 2013-09-20 11:37, Szymon Gatner wrote:If only std algorithms took containers (by that I mean things that container for accepts too) as arguments (and not iterators)... even in the form of new functions like foreach_all, transform_all etc.Can't a container be a range as well?Traversing the tree at all requires state, whether or not you do it with ranges. You either put the state on the runtime stack (recursion), or you put the state on the heap in the range adaptor. Same thing.Filtering is just an adapter. Better examples are traversing e.g. binary-tree depth-first in-order, or post-order and that would require state. Again it's easy to miss by looking at built-in arrays.For Christ sake no, no and no. For starters range skips/drops elements when iterating, and thusly iteration has its own state that is useless for a container otherwise.That would be a filtering range which adds additional logic that costs exactly the same as an if() statement inside a for loop when filtering on condition manually.[...] Some trees do, like document trees. But yeah, in general, you don't have a natural ordering to trees.If said tree needs to be a range I would be horrified to see how it manges to be at the same time a range that (for the sake of example) traverses said tree depth-first in-order. Not even talking about trees having no one "natural" range over them.TL;DR: Suboptimal, unnatural and error prone are keywords.Why would it be suboptimal?But then again, the whole point of ranges is *linear* traversal. 
If you're not doing that, then it's probably the wrong abstraction to use.Then traversing all files in a directory in a specific order is not interesting to you? I hardly find that logical. Abstraction is wrong only where it doesn't make sense, and here it does. There is something linear about a tree, and that is that every sane path taken through it is linear (after all there are no cycles). BTW I see nothing problematic in seeking a minimal set of primitives that would allow working on a tree-like topology. It won't be the same range but nevertheless useful.(FWIW, while SortedRange is a neat hack, I haven't actually found it that useful in practice. Most of the ranges I actually use are for the cases where it's an input/forward range and you don't want to store the whole thing, so random access is impractical.Sorted forward ranges are actually fine. You may do interesting things with them like O(N+M) merging, subtracting and whatnot. Set operations for the win (they are not defined in Phobos yet).The cases where I actually have random access usually turn out to be plain ole arrays anyway, so the SortedRange abstraction isn't really necessary.) So for trees, I'd argue that you need a tree-specific iteration abstraction, not ranges, which are linear. It's a clear case of structure conflict. ;-) T
Sep 20 2013
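The O(N+M) set operations Dmitry mentions need only sorted forward traversal, no random access. In C++ the standard library already provides them over plain iterators:

```cpp
#include <algorithm>
#include <iterator>
#include <vector>

// Intersect two sorted sequences in a single linear pass, O(N+M).
// std::set_intersection only requires input iterators on sorted data,
// so any sorted forward traversal suffices.
std::vector<int> intersect(const std::vector<int>& a,
                           const std::vector<int>& b) {
    std::vector<int> out;
    std::set_intersection(a.begin(), a.end(), b.begin(), b.end(),
                          std::back_inserter(out));
    return out;
}
```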
On 20/09/13 16:48, H. S. Teoh wrote:A container should not be confused with a range. That way leads to dragons. :-P (It's rather unfortunate that built-in arrays conflate the two, it leads to a lot of wrong code that works only with arrays but not with "real" ranges.)Built-in arrays are not _always_ ranges. Consider const(int[]) ... as I found out recently, it's _not_ a range, because you can't popFront on a const entity.
Sep 20 2013
On 9/20/13 8:13 AM, Joseph Rushton Wakeling wrote:On 20/09/13 16:48, H. S. Teoh wrote:Well you can't call popFront on any const range. After some time I learned to make peace with the dual nature of built-in slices. Part of that was the failed experiment to introduce the type "new T[]". Another part is the simple realization that built-in slices are not "regular" ranges, they have deeper connections in D's object model, so they're allowed to be a little special. AndreiA container should not be confused with a range. That way leads to dragons. :-P (It's rather unfortunate that built-in arrays conflate the two, it leads to a lot of wrong code that works only with arrays but not with "real" ranges.)Built-in arrays are not _always_ ranges. Consider const(int[]) ... as I found out recently, it's _not_ a range, because you can't popFront on a const entity.
Sep 20 2013
On Friday, 20 September 2013 at 16:38:29 UTC, Andrei Alexandrescu wrote:On 9/20/13 8:13 AM, Joseph Rushton Wakeling wrote:What is even the purpose of const ranges? It makes little sense imho. Why would a GoF iterator be immutable? It is like trying to iterate over a collection by indexing but only having a const int as the index variable. If anything it only makes sense to have ranges to const (an abstraction over const_iterator).On 20/09/13 16:48, H. S. Teoh wrote:Well you can't call popFront on any const range.A container should not be confused with a range. That way leads to dragons. :-P (It's rather unfortunate that built-in arrays conflate the two, it leads to a lot of wrong code that works only with arrays but not with "real" ranges.)Built-in arrays are not _always_ ranges. Consider const(int[]) ... as I found out recently, it's _not_ a range, because you can't popFront on a const entity.
Sep 20 2013
On Friday, September 20, 2013 18:46:34 Szymon Gatner wrote:On Friday, 20 September 2013 at 16:38:29 UTC, Andrei Alexandrescu wrote:If an object is const, then all of its members are const, which means that any ranges you get from its members will be const, making such ranges useless. But it still makes sense to iterate over such a range (in most cases, you care about the elements in the range, not the range itself). But since you can't iterate over the range (because it's const), you need to get a tail-const slice of it to iterate over instead. If you can do that, then const ranges aren't a huge problem. The problem is that making it so that you can get a tail-const slice of a const user-defined range is incredibly difficult. The compiler understands arrays, so it works there, but it doesn't work anywhere else. So, at this point, const and ranges don't play nicely at all. - Jonathan M DavisOn 9/20/13 8:13 AM, Joseph Rushton Wakeling wrote:What is even the purpose of const ranges? It makes little sense imho. Why would GoF iterator be immutable? It is like trying to iterate over collection bu indexing but only having const int as index variable. If anything it only make sense to have ranges to const (abstraction over const_iterator).On 20/09/13 16:48, H. S. Teoh wrote:Well you can't call popFront on any const range.A container should not be confused with a range. That way leads to dragons. :-P (It's rather unfortunate that built-in arrays conflate the two, it leads to a lot of wrong code that works only with arrays but not with "real" ranges.)Built-in arrays are not _always_ ranges. Consider const(int[]) ... as I found out recently, it's _not_ a range, because you can't popFront on a const entity.
Sep 20 2013
On Friday, 20 September 2013 at 16:57:43 UTC, Jonathan M Davis wrote:If an object is const, then all of its members are const, which means that any ranges you get from its members will be const, making such ranges useless.That is so weird to hear considering I added ranges to my C++ code and my Vector<T>::all() const can easily return non-const range even tho container is itself const. This kinda looks like D is more limited in that area than C++... Or I really am not getting something.
Sep 20 2013
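Szymon's C++ arrangement can be sketched like this (Vector, all(), and ConstIntRange stand in for his own types): a const member function returning a fresh, *mutable* range object whose elements are const. The range itself advances; the data it views does not change.

```cpp
#include <utility>
#include <vector>

// Range over const elements: popFront() mutates the range object
// (the cursor), never the elements it points at.
struct ConstIntRange {
    const int* cur;
    const int* end;

    bool empty() const { return cur == end; }
    int front() const { return *cur; }
    void popFront() { ++cur; }
};

class Vector {
    std::vector<int> data_;
public:
    explicit Vector(std::vector<int> d) : data_(std::move(d)) {}

    // Fine to call on a const Vector: the returned range is a new
    // value object, freely mutable, viewing only const elements.
    ConstIntRange all() const {
        return {data_.data(), data_.data() + data_.size()};
    }
};
```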
On 9/20/13 10:02 AM, Szymon Gatner wrote:On Friday, 20 September 2013 at 16:57:43 UTC, Jonathan M Davis wrote:Yah, it should be possible for a const container to offer ranges over its (non-modifiable) elements. AndreiIf an object is const, then all of its members are const, which means that any ranges you get from its members will be const, making such ranges useless.That is so weird to hear considering I added ranges to my C++ code and my Vector<T>::all() const can easily return non-const range even tho container is itself const. This kinda looks like D is more limited in that area than C++... Or I really am not getting something.
Sep 20 2013
On Friday, September 20, 2013 10:47:15 Andrei Alexandrescu wrote:On 9/20/13 10:02 AM, Szymon Gatner wrote:That's probably easy to do if the container is well-written, because the container creates the range. The problem is converting a const range to tail const one, which is what you have to do if you ever end up with a const range for any reason, and if you're using const much, that's trivial to do (especially if you ever have a range as a member variable). Unfortunately, in practice, that conversion is quite difficult to do with user-defined types even though it works just fine with arrays. - Jonathan M DavisOn Friday, 20 September 2013 at 16:57:43 UTC, Jonathan M Davis wrote:Yah, it should be possible for a const container to offer ranges over its (non-modifiable) elements.If an object is const, then all of its members are const, which means that any ranges you get from its members will be const, making such ranges useless.That is so weird to hear considering I added ranges to my C++ code and my Vector<T>::all() const can easily return non-const range even tho container is itself const. This kinda looks like D is more limited in that area than C++... Or I really am not getting something.
Sep 20 2013
On 9/20/13 10:52 AM, Jonathan M Davis wrote:On Friday, September 20, 2013 10:47:15 Andrei Alexandrescu wrote:I'm not sure such a conversion is needed all that often. AndreiOn 9/20/13 10:02 AM, Szymon Gatner wrote:That's probably easy to do if the container is well-written, because the container creates the range. The problem is converting a const range to tail const one, which is what you have to do if you ever end up with a const range for any reason, and if you're using const much, that's trivial to do (especially if you ever have a range as a member variable). Unfortunately, in practice, that conversion is quite difficult to do with user-defined types even though it works just fine with arrays.On Friday, 20 September 2013 at 16:57:43 UTC, Jonathan M Davis wrote:Yah, it should be possible for a const container to offer ranges over its (non-modifiable) elements.If an object is const, then all of its members are const, which means that any ranges you get from its members will be const, making such ranges useless.That is so weird to hear considering I added ranges to my C++ code and my Vector<T>::all() const can easily return non-const range even tho container is itself const. This kinda looks like D is more limited in that area than C++... Or I really am not getting something.
Sep 20 2013
On Friday, September 20, 2013 12:29:33 Andrei Alexandrescu wrote:On 9/20/13 10:52 AM, Jonathan M Davis wrote:Every time that there's a member variable which is a range, you end up needing it if you ever need to iterate over the members of that range when the object itself is const - the prime example being when the range is returned from a getter or property function. I think that you tend to be able to avoid it if you're not doing much OO and your code is very functional in nature, but if you've got many objects with data, then you start running into it (probably forcing you to move the contents of the range into an array in many cases). Also, there are plenty of folks who tend to try and use const and/or immutable as much as possible in their code, and when you do that, ranges become useless pretty quickly, whereas if you could easily get a tail-const version of the range when slicing it, ranges can then still be useful in code that uses const and immutable heavily. As it stands, you pretty much have to avoid const or immutable when doing much with ranges thanks to the lack of tail-const conversion for user-defined ranges, and while that certainly isn't a fatal restriction, it's certainly frustrating when const and immutable are supposed to be major advantages of D. I suspect that the main reason that we don't get more complaints about it is because arrays don't have the problem, but the problem does still come up from time to time. - Jonathan M DavisOn Friday, September 20, 2013 10:47:15 Andrei Alexandrescu wrote:I'm not sure such a conversion is needed all that often.On 9/20/13 10:02 AM, Szymon Gatner wrote:That's probably easy to do if the container is well-written, because the container creates the range. The problem is converting a const range to tail const one, which is what you have to do if you ever end up with a const range for any reason, and if you're using const much, that's trivial to do (especially if you ever have a range as a member variable). 
Unfortunately, in practice, that conversion is quite difficult to do with user-defined types even though it works just fine with arrays.On Friday, 20 September 2013 at 16:57:43 UTC, Jonathan M Davis wrote:Yah, it should be possible for a const container to offer ranges over its (non-modifiable) elements.If an object is const, then all of its members are const, which means that any ranges you get from its members will be const, making such ranges useless.That is so weird to hear considering I added ranges to my C++ code and my Vector<T>::all() const can easily return non-const range even tho container is itself const. This kinda looks like D is more limited in that area than C++... Or I really am not getting something.
Sep 20 2013
On Friday, 20 September 2013 at 19:29:33 UTC, Andrei Alexandrescu wrote:I'm not sure such a conversion is needed all that often.That has been a show stopper for several things I wanted to implement in SDC. We need to be able to do that.
Sep 20 2013
On Friday, September 20, 2013 19:02:22 Szymon Gatner wrote:On Friday, 20 September 2013 at 16:57:43 UTC, Jonathan M Davis wrote:The container isn't the problem. It's converting a const range to a tail-const one (e.g. const(Range!int) to Range!(const int)). In D, const is transitive, whereas in C++, const only affects the top level. So, if you have

    class C {
        auto foo() const { return i; }
        private int* i;
    }

then foo is going to return a const(int*). Thanks to the fact that const int* is implicitly convertible to const(int)*, you could also return const(int)*, if you chose, but you could never return int*, whereas C++ would let you do that just fine, because in C++ it's only the member variable itself which is considered const, not the whole thing, whereas in D, once an object is const, it and everything it refers to is const when accessed through it. So, in D, if you have a member function that returns a range, the only way that it can return a non-const range is if it's able to convert the fully const range to a tail-const one (i.e. the range itself is non-const but what it refers to is still const). That can easily be done by creating an entirely new range from the original (e.g. convert it to an array and return that), but converting a const range to a tail-const one is very difficult to do with user-defined types. To do that, we need to be able to convert const(Range!Foo) to Range!(const Foo), and thanks to how templates work those are completely different template instantiations (meaning that technically, they don't necessarily have anything to do with each other - e.g. their internal layout could be completely different), so the compiler can't assume that that conversion will work. You have to write it yourself, which quickly runs into problems with recursive template instantiations and the like. I believe that it can be done with the proper use of static if, but it does mean that you have to know what you're doing. It's not straightforward, and the last time that I attempted it, I failed. This is in stark contrast to arrays, where the compiler knows full well that it can convert const(int[]) to const(int)[] without causing any problems. - Jonathan M DavisIf an object is const, then all of its members are const, which means that any ranges you get from its members will be const, making such ranges useless.That is so weird to hear considering I added ranges to my C++ code and my Vector<T>::all() const can easily return non-const range even tho container is itself const. This kinda looks like D is more limited in that area than C++... Or I really am not getting something.
Sep 20 2013
On Fri, Sep 20, 2013 at 11:04:10AM -0700, Jonathan M Davis wrote: [...]So, in D, if you have a member function that returns a range, the only way that it can return a non-const range is if it's able to convert the fully const range to a tail-const one (i.e. the range itself is non-const but what it refers to is still const). That can easily be done by creating an entirely new range from the original (e.g. convert it to an array and return that), but converting a const range to a tail-const one is very difficult to do with user-defined types. To do that, we need to be able to convert const(Range!Foo) to Range!(const Foo), and thanks to how templates work those are completely different template instantiations (meaning that technically, they don't necessarily have anything to do with each other - e.g. their internal layout could be completely different), so the compiler can't assume that that conversion will work. You have to write it yourself, which quickly runs into problems with recursive template instantiations and the like. I believe that it can be done with the proper use of static if, but it does mean that you have to know what you're doing. It's not straightforward, and the last time that I attempted it, I failed. This is in stark contrast to arrays, where the compiler knows full well that it can convert const(int[]) to const(int)[] without causing any problems.[...] Yeah, this is one of the areas of D's const system that irks me. It's annoying to deal with even outside of ranges. For example, if you want to traverse the container (from inside a member function), if the member function is const, then all references to the container's internal objects are const:

class Tree {
    Node* root;
    const(Node)* find() const {
        // curNode is inferred as const(Node*), i.e., the pointer
        // itself is const, since root is a member of const(Tree).
        auto curNode = root;
        while (someCond) {
            if (...) {
                curNode = curNode.left;   // NG: can't modify a const ptr
            } else {
                curNode = curNode.right;  // NG: can't modify a const ptr
            }
        }
        ...
        // NG: can't convert const(Node*) to const(Node)*.
        return curNode;
    }
}

The workaround is to explicitly state the type of curNode as tail-const:

    const(Node)* curNode = root;
    ...

This is a very simple case, and it's already ugly. Once the code gets more complex, the problem becomes uglier. For example, because inout is treated just like const, inout member functions suffer from the same problem. When you then throw in user-defined types that have const/immutable members, the waters get even murkier. T -- Bare foot: (n.) A device for locating thumb tacks on the floor.
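The const(Range!Foo)-to-Range!(const Foo) conversion Jonathan describes can at least be sketched for the easy case where the range's only payload is a slice. The names here (Range, the opSlice hook) are purely illustrative, not actual Phobos machinery, and anything whose members recursively involve the template still needs the static-if gymnastics he mentions:

```d
// Hedged sketch: a minimal slice-backed range that manufactures a
// tail-const copy of itself from a const context. This dodges the
// recursive-template problem only because the payload is a plain array.
struct Range(T)
{
    T[] data;

    @property bool empty() const { return data.length == 0; }
    @property ref inout(T) front() inout { return data[0]; }
    void popFront() { data = data[1 .. $]; }

    // const(Range!T) -> Range!(const T): the result is itself mutable
    // (popFront works), but the elements it refers to stay const.
    Range!(const T) opSlice() const
    {
        // For arrays, const(T[]) converts implicitly to const(T)[],
        // which is exactly the conversion user-defined types lack.
        return Range!(const T)(data);
    }
}
```

For a user-defined payload instead of T[], that constructor call is where the recursive instantiation trouble starts.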
Sep 20 2013
On 9/20/13 9:46 AM, Szymon Gatner wrote:On Friday, 20 September 2013 at 16:38:29 UTC, Andrei Alexandrescu wrote:I'd agree. As an aside, they don't have to make sense. AndreiOn 9/20/13 8:13 AM, Joseph Rushton Wakeling wrote:What is even the purpose of const ranges? It makes little sense imho.On 20/09/13 16:48, H. S. Teoh wrote:Well you can't call popFront on any const range.A container should not be confused with a range. That way leads to dragons. :-P (It's rather unfortunate that built-in arrays conflate the two, it leads to a lot of wrong code that works only with arrays but not with "real" ranges.)Built-in arrays are not _always_ ranges. Consider const(int[]) ... as I found out recently, it's _not_ a range, because you can't popFront on a const entity.
Sep 20 2013
On Fri, Sep 20, 2013 at 05:13:45PM +0200, Joseph Rushton Wakeling wrote:On 20/09/13 16:48, H. S. Teoh wrote:Which makes it even more confusing, since newbies would probably equate arrays (the container) with ranges from their frequent use in range examples. Perhaps it's more useful to think of T[] not as an array per se, but as a *slice* of the underlying array data which is managed by druntime. I think I'm OK with saying that arrays (i.e. the underlying data) are containers, while slices (what we think of today as "arrays") are ranges. Of course, the distinction isn't that clear cut, because appending to a "slice" creates a new array and returns a slice of that, so there's still some murky areas here. T -- Life would be easier if I had the source code. -- YHLA container should not be confused with a range. That way leads to dragons. :-P (It's rather unfortunate that built-in arrays conflate the two, it leads to a lot of wrong code that works only with arrays but not with "real" ranges.)Built-in arrays are not _always_ ranges. Consider const(int[]) ... as I found out recently, it's _not_ a range, because you can't popFront on a const entity.
Sep 20 2013
On 20/09/13 17:22, H. S. Teoh wrote:Which makes it even more confusing, since newbies would probably equate arrays (the container) with ranges from their frequent use in range examples.Yes, it's completely non-obvious. I think the first time I realized the range/container distinction was when I tried experimentally replacing some built-in arrays with std.container.Array and discovered that I couldn't use them in a foreach loop.Perhaps it's more useful to think of T[] not as an array per se, but as a *slice* of the underlying array data which is managed by druntime. I think I'm OK with saying that arrays (i.e. the underlying data) are containers, while slices (what we think of today as "arrays") are ranges.It's not a bad description, but I'm not sure that takes into account the const(T[]) case.
Sep 20 2013
On Fri, Sep 20, 2013 at 05:42:03PM +0200, Joseph Rushton Wakeling wrote:On 20/09/13 17:22, H. S. Teoh wrote:Sad to say, I encountered a good number of Phobos bugs caused by the conflation between built-in arrays and ranges. Code would inadvertently assume array behaviour on ranges, and break when you pass in a non-array range. Some of these have been fixed; I'm pretty sure there are still more lurking around.Which makes it even more confusing, since newbies would probably equate arrays (the container) with ranges from their frequent use in range examples.Yes, it's completely non-obvious. I think the first time I realized the range/container distinction was when I tried experimentally replacing some built-in arrays with std.container.Array and discovered that I couldn't use them in a foreach loop.Actually, it helps you understand the const(T[]) case. To iterate over a const array, you need a range over it (i.e., a slice); and indeed, writing arr[] on an array of type const(T[]) gives you a tail-const slice of type const(T)[], which *is* an iterable range. The confusion really just stems from the conflation between T[] and a range over T[] in the non-const case. T -- People tell me I'm stubborn, but I refuse to accept it!Perhaps it's more useful to think of T[] not as an array per se, but as a *slice* of the underlying array data which is managed by druntime. I think I'm OK with saying that arrays (i.e. the underlying data) are containers, while slices (what we think of today as "arrays") are ranges.It's not a bad description, but I'm not sure that takes into account the const(T[]) case.
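The array/slice distinction can be checked mechanically; a small sketch, assuming std.range.primitives as shipped in Phobos:

```d
import std.range.primitives : isInputRange;

void main()
{
    const int[] arr = [1, 2, 3];

    // const(int[]) is not a range: popFront can't rebind a const slice.
    static assert(!isInputRange!(typeof(arr)));

    // Slicing yields the tail-const view: const elements, mutable slice.
    const(int)[] s = arr[];
    static assert(isInputRange!(typeof(s)));

    s.popFront();              // fine: only the view advances
    assert(s.length == 2);
    assert(arr.length == 3);   // the underlying data is untouched
}
```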
Sep 20 2013
On 20/09/13 20:53, H. S. Teoh wrote:Sad to say, I encountered a good number of Phobos bugs caused by the conflation between built-in arrays and ranges. Code would inadvertently assume array behaviour on ranges, and break when you pass in a non-array range. Some of these have been fixed; I'm pretty sure there are still more lurking around.In my case, I discovered a number of bugs that stemmed from unittests where the default range type used was an array. In that case, it wasn't so much the conflation of range and container, but the fact that arrays are random-access ranges and so have the maximal set of range properties -- so code that _theoretically_ should have worked with input ranges was actually failing, because it wasn't being tested.Actually, it helps you understand the const(T[]) case. To iterate over a const array, you need a range over it (i.e., a slice); and indeed, writing arr[] on an array of type const(T[]) gives you a tail-const slice of type const(T)[], which *is* an iterable range. The confusion really just stems from the conflation between T[] and a range over T[] in the non-const case.Ahh, OK. You're right, that does help. Thank you! :-)
Sep 24 2013
On Friday, 20 September 2013 at 10:47:52 UTC, Dmitry Olshansky wrote:20-Sep-2013 14:00, Jacob Carlborg пишет:Iteration is a stateful process, ranges are related to the process of iteration not to containers. As you say state is useless for containers but is necessary to iteration and its context.On 2013-09-20 11:37, Szymon Gatner wrote:For Christ sake no, no and no. For starters range skips/drops elements when iterating, and thusly iteration has its own state that is useless for a container otherwise.If only std algorithms took containers (by that I mean things that container for accepts too) as arguments (and not iterators)... even in the form of new functions like foreach_all, transform_all etc.Can't a container be a range as well?
Sep 20 2013
20-Sep-2013 15:01, Szymon Gatner пишет:On Friday, 20 September 2013 at 10:47:52 UTC, Dmitry Olshansky wrote:A text-book example of self-destruction(?). Ranges (in particular Input/Forward) are not much above encapsulation of iteration, hence must contain that state required to iterate said elements. Which leads to the point that indeed containers have no business being ranges by themselves. The bottom line is: sort(container[]); vs sort(container); Where I hardly see how coupling containers with algorithms can bring even slightest benefit. -- Dmitry Olshansky20-Sep-2013 14:00, Jacob Carlborg пишет:Iteration is a stateful process, ranges are related to the process of iteration not to containers. As you say state is useless for containers but is necessary to iteration and its context.On 2013-09-20 11:37, Szymon Gatner wrote:For Christ sake no, no and no. For starters range skips/drops elements when iterating, and thusly iteration has its own state that is useless for a container otherwise.If only std algorithms took containers (by that I mean things that container for accepts too) as arguments (and not iterators)... even in the form of new functions like foreach_all, transform_all etc.Can't a container be a range as well?
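The sort(container[]) form is concretely what std.container already supports; a sketch, relying on Array's opSlice returning a random-access range over its payload:

```d
import std.algorithm : equal, sort;
import std.container : Array;

void main()
{
    auto a = Array!int(3, 1, 2);

    // The algorithm takes the range the container hands out,
    // not the container itself.
    sort(a[]);

    assert(equal(a[], [1, 2, 3]));
    // sort(a); // would not compile: Array!int is not a range
}
```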
Sep 20 2013
On Friday, 20 September 2013 at 13:32:47 UTC, Dmitry Olshansky wrote:20-Sep-2013 15:01, Szymon Gatner пишет:OK so it seems we agree. I never said that containers should be ranges. Ranges are abstraction on iterators and that is it. Single container can have multiple ranges existing at the same time. State is attached to ranges not containers. And most importantly ranges are mutable always, even if underlying container isn't. Ranges are meant to have state. No idea what you mean by self destruction.On Friday, 20 September 2013 at 10:47:52 UTC, Dmitry Olshansky wrote:A text-book example of self-destruction(?). Ranges (in particular Input/Forward) are not much above encapsulation of iteration, hence must contain that state required to iterate said elements. Which leads to the point that indeed containers have no business being ranges by themselves. The bottom line is: sort(container[]); vs sort(container); Where I hardly see how coupling containers with algorithms can bring even slightest benefit.20-Sep-2013 14:00, Jacob Carlborg пишет:Iteration is a stateful process, ranges are related to the process of iteration not to containers. As you say state is useless for containers but is necessary to iteration and its context.On 2013-09-20 11:37, Szymon Gatner wrote:For Christ sake no, no and no. For starters range skips/drops elements when iterating, and thusly iteration has its own state that is useless for a container otherwise.If only std algorithms took containers (by that I mean things that container for accepts too) as arguments (and not iterators)... even in the form of new functions like foreach_all, transform_all etc.Can't a container be a range as well?
Sep 20 2013
20-Sep-2013 17:50, Szymon Gatner пишет:On Friday, 20 September 2013 at 13:32:47 UTC, Dmitry Olshansky wrote:[snip]20-Sep-2013 15:01, Szymon Gatner пишет:On Friday, 20 September 2013 at 10:47:52 UTC, Dmitry Olshansky wrote:Then it may be a misunderstanding on my part. I was referring to your previous reply. Where I basically said:A text-book example of self-destruction(?). Ranges (in particular Input/Forward) are not much above encapsulation of iteration, hence must contain that state required to iterate said elements. Which leads to the point that indeed containers have no business being ranges by themselves. The bottom line is: sort(container[]); vs sort(container); Where I hardly see how coupling containers with algorithms can bring even slightest benefit.OK so it seems we agree. I never said that containers should be ranges. Ranges are abstraction on iterators and that is it. Single container can have multiple ranges existing at the same time. State is attached to ranges not containers. And most importantly ranges are mutable always, even if underlying container isn't. Ranges are meant to have state. No idea what you mean by self destruction.Can't a container be a range as well?For Christ sake no, no and no.[... Because that would be ...]TL;DR: Suboptimal, unnatural and error prone are keywords.Then your question - Why would it be suboptimal? Which your second reply seem to clearly explain: extra state placed where it doesn't belong. I can't easily correlate your two answers as they look as if the second one answers questions of the first. Anyhow we are in agreement here. -- Dmitry Olshansky
Sep 20 2013
On Friday, 20 September 2013 at 14:04:06 UTC, Dmitry Olshansky wrote:20-Sep-2013 17:50, Szymon Gatner пишет:Mind that "Can't a container be a range as well?" was not from me. Still, moving computation over a range from for loop body into the range implementation (like filtering) incurs no performance penalty and has additional benefits of composability and maintainability. Do you mean that is suboptimal?On Friday, 20 September 2013 at 13:32:47 UTC, Dmitry Olshansky wrote:[snip]20-Sep-2013 15:01, Szymon Gatner пишет:On Friday, 20 September 2013 at 10:47:52 UTC, Dmitry Olshansky wrote:Then it may be a misunderstanding on my part. I was referring to your previous reply. Where I basically said:A text-book example of self-destruction(?). Ranges (in particular Input/Forward) are not much above encapsulation of iteration, hence must contain that state required to iterate said elements. Which leads to the point that indeed containers have no business being ranges by themselves. The bottom line is: sort(container[]); vs sort(container); Where I hardly see how coupling containers with algorithms can bring even slightest benefit.OK so it seems we agree. I never said that containers should be ranges. Ranges are abstraction on iterators and that is it. Single container can have multiple ranges existing at the same time. State is attached to ranges not containers. And most importantly ranges are mutable always, even if underlying container isn't. Ranges are meant to have state. No idea what you mean by self destruction.Can't a container be a range as well?For Christ sake no, no and no.[... Because that would be ...]TL;DR: Suboptimal, unnatural and error prone are keywords.Then your question - Why would it be suboptimal? Which your second reply seem to clearly explain: extra state placed where it doesn't belong. I can't easily correlate your two answers as they look as if the second one answers questions of the first. Anyhow we are in agreement here.
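For example, an if inside a loop body can move into a lazy filter with no copy of the data; a sketch with std.algorithm:

```d
import std.algorithm : equal, filter, map;

void main()
{
    auto data = [1, 2, 3, 4, 5];

    // Lazy pipeline: nothing runs until the result is consumed,
    // and no intermediate array is allocated.
    auto odds2 = data.filter!(x => x % 2 == 1)
                     .map!(x => x * x);

    assert(equal(odds2, [1, 9, 25]));
}
```

The composed pipeline is equivalent to the hand-written loop with an if in its body, but each stage stays independently reusable.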
Sep 20 2013
20-Sep-2013 18:13, Szymon Gatner пишет:On Friday, 20 September 2013 at 14:04:06 UTC, Dmitry Olshansky wrote:Yup, but it was what I was answering ... when you asked about why would it be suboptimal... A container that is a range at the same time is suboptimal, that's all.Mind that "Can't a container be a range as well?" was not from me.Can't a container be a range as well?For Christ sake no, no and no.[... Because that would be ...]TL;DR: Suboptimal, unnatural and error prone are keywords.Then your question - Why would it be suboptimal? Which your second reply seem to clearly explain: extra state placed where it doesn't belong. I can't easily correlate your two answers as they look as if the second one answers questions of the first. Anyhow we are in agreement here.Still, moving computation over a range from for loop body into the range implementation (like filtering) incurs no performance penalty and has additional benefits of composability and maintainability. Do you mean that is suboptimal?I made no statement on this I think. It shouldn't be inefficient. The only problem that I foresee is that it could be harder for the compiler to auto-magically vectorize such code to take advantage of SIMD in the CPU. For that matter I think e.g. explicitly providing a range of float4 feels like a better way to go. -- Dmitry Olshansky
Sep 20 2013
On Friday, 20 September 2013 at 15:09:57 UTC, Dmitry Olshansky wrote:20-Sep-2013 18:13, Szymon Gatner пишет:Yup, we agree then :) I think I add to confusion also because I think about C++ and not D. I didn't realize that arrays/slices are ranges in D. That is not good indeed.On Friday, 20 September 2013 at 14:04:06 UTC, Dmitry Olshansky wrote:Yup, but it was what I was answering ... when you asked about why would it be suboptimal... A container that is a range at the same time is suboptimal, that's all.Mind that "Can't a container be a range as well?" was not from me.Can't a container be a range as well?For Christ sake no, no and no.[... Because that would be ...]TL;DR: Suboptimal, unnatural and error prone are keywords.Then your question - Why would it be suboptimal? Which your second reply seem to clearly explain: extra state placed where it doesn't belong. I can't easily correlate your two answers as they look as if the second one answers questions of the first. Anyhow we are in agreement here.Still, moving computation over a range from for loop body into the range implementation (like filtering) incurs no performance penalty and has additional benefits of composability and maintainability. Do you mean that is suboptimal?I made no statement on this I think. It shouldn't be ineffective. The only problem with it that I foresee is that in order to take advantage of vectorized SIMD in CPU it could be harder to auto-magically vectorize such code for compiler. For that matter I think e.g. providing explicitly a range of float4 fells like a better way to go.
Sep 20 2013
On Fri, 20 Sep 2013 14:47:38 +0400 Dmitry Olshansky <dmitry.olsh gmail.com> wrote:For Christ sake no, no and no. For starters range skips/drops elements when iterating, and thusly iteration has its own state that is useless for a container otherwise. The idea of "contange" spreads like virus, no matter how abominable the end result is. The fact that slices sometimes look like containers (BTW ill-suited for anything beyond primitives/PODish stuff ) must be the corner stone of this belief. Strengthened on the motto of trying to make user defined types to mimic behavior of built-ins it leads to a school of thought that it's fine to blend ownership and access.I can vouch that ease of conflating array/slice/range, while normally a wonderful convenience, has indeed led me into confusion when trying to make a custom type range-compatible. I felt naturally inclined to add the range primitives to the type itself, but when that led to issues like you described (especially the fact that a range *consumes* its elements while iterating, plus the inability to provide alternate views of the same data - which is a very powerful tool IMO), I finally started to grok that a container needs to *provide* a range, and not actually *be* one. Well, except maybe for output ranges. (I think?)TL;DR: Suboptimal, unnatural and error prone are keywords.They are? Cool! auto foo(T)(real a, unnatural b, lazy Suboptimal!T opts) {...} Looks fun! :)
Sep 20 2013
20-Sep-2013 15:08, Nick Sabalausky пишет:On Fri, 20 Sep 2013 14:47:38 +0400 Dmitry Olshansky <dmitry.olsh gmail.com> wrote:Aye. I finally started to grok that a containerFor Christ sake no, no and no. For starters range skips/drops elements when iterating, and thusly iteration has its own state that is useless for a container otherwise. The idea of "contange" spreads like virus, no matter how abominable the end result is. The fact that slices sometimes look like containers (BTW ill-suited for anything beyond primitives/PODish stuff ) must be the corner stone of this belief. Strengthened on the motto of trying to make user defined types to mimic behavior of built-ins it leads to a school of thought that it's fine to blend ownership and access.I can vouch that ease of conflating array/slice/range, while normally a wonderful convenience, has indeed led me into confusion when trying to make a custom type range-compatible. I felt naturally inclined to add the range primitives to the type itself, but when that led to issues like you described (especially the fact that a range *consumes* its elements while iterating, plus the inability to provide alternate views of the same data - which is a very powerful tool IMO),needs to *provide* a range, and not actually *be* one.This ^^ I can't stress that last statement enough.Well, except maybe for output ranges. (I think?)Output ranges in my mind may wrap a suitable container by ref. I think that adding 'put' to a container is not a bad idea. It would essentially do insertAny(..) implying that order of insertion is not interesting. However since there are many ways of inserting items (insertFront/insertBack?) it makes more sense in general to have a simple helper sink. backInserter? Silly but simple to remember.Yes :) 1. Suboptimal - extra state to worry about, with mutation baked-in. Harder path to immutability/shared. 2. Unnatural - the reason to have ranges in the first place? Now we pass containers to algorithms ... 
The decoupling is lost. 3. Error prone - unclean semantics. Does iterating a container mutate it? If not what about iterating it twice in a loop (inner-outer)? Is there a copy somewhere? Copy of what? I can go on for days here.TL;DR: Suboptimal, unnatural and error prone are keywords.They are? Cool!auto foo(T)(real a, unnatural b, lazy Suboptimal!T opts) {...} Looks fun! :)-- Dmitry Olshansky
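Phobos's closest existing analogue to a backInserter-style helper sink is std.array.appender, an output range wrapping a growing array; a sketch:

```d
import std.array : appender;
import std.range : put;

void main()
{
    auto sink = appender!(int[])();

    // put() is the single output-range verb; what "insertion" means is
    // up to the sink -- here, append at the back.
    put(sink, 42);
    put(sink, [1, 2]);

    assert(sink.data == [42, 1, 2]);
}
```

A hypothetical backInserter over a std.container type would work the same way: hold the container by ref and forward put() to insertBack.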
Sep 20 2013
On 9/20/13 3:47 AM, Dmitry Olshansky wrote:20-Sep-2013 14:00, Jacob Carlborg пишет:Agreed. A container is a range factory. AndreiOn 2013-09-20 11:37, Szymon Gatner wrote:For Christ sake no, no and no. For starters range skips/drops elements when iterating, and thusly iteration has its own state that is useless for a container otherwise. The idea of "contange" spreads like virus, no matter how abominable the end result is. The fact that slices sometimes look like containers (BTW ill-suited for anything beyond primitives/PODish stuff ) must be the corner stone of this belief. Strengthened on the motto of trying to make user defined types to mimic behavior of built-ins it leads to a school of thought that it's fine to blend ownership and access. TL;DR: Suboptimal, unnatural and error prone are keywords.If only std algorithms took containers (by that I mean things that container for accepts too) as arguments (and not iterators)... even in the form of new functions like foreach_all, transform_all etc.Can't a container be a range as well?
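The "range factory" idea in one hypothetical struct (Stack and its opSlice are illustrative only):

```d
// Sketch: the container owns the storage and merely manufactures
// ranges on request; it is never a range itself.
struct Stack(T)
{
    private T[] items;

    void push(T value) { items ~= value; }

    // The factory method: each call yields an independent view,
    // so two simultaneous iterations never interfere.
    T[] opSlice() { return items; }
}

void main()
{
    Stack!int s;
    s.push(1);
    s.push(2);

    foreach (x; s[]) { }       // iterate a view, not the container
    assert(s[].length == 2);   // the container itself is untouched
}
```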
Sep 20 2013
On 9/20/13, Szymon Gatner <noemail gmail.com> wrote:I had similar thoughts when watching GoingNative 2013.Yeah. For example, Andrei's talk was very entertaining, but it was filled with "well I'm using this trick here, but I'll explain this later" bits, which just made following the whole thing really difficult, and it's all because C++ has become a monster.
Sep 20 2013