
digitalmars.D - foreach thoughts

reply Manu <turkeyman gmail.com> writes:
So, the last few days, 2 things have been coming up constantly.

foreach is like, the best thing about D. But I often get caught by 2 little
details that result in my having to create a bunch more visual noise.

1. A termination condition (ie, while)

foreach(t; things) iterates each thing, but it's common in traditional for
loops to have an && in the second 'while' term, to add an additional
termination condition.
for(i=0; i<things.length && things[i].someCondition; ++i)

Or with foreach:
foreach(i, t; things)
{
  if(t.someCondition)
    break;
  ...
}

I often feel like I want something like this where I would have the
opportunity to add that additional term I lose with a traditional for loop:
  foreach(i, t; things; t.someCondition)


2. A filter

The other thing is the ability to skip uninteresting elements. This is
typically performed with the first line of the loop testing a condition,
and then continue:
foreach(i, t; things)
{
  if(!t.isInteresting)
    continue;
  ...
}


I'm finding in practice that at least one of these seems to pop up on the
vast majority of loops I'm writing. It produces a lot of visual noise, and
I'm finding it's quite a distraction from the otherwise relative tidiness
of my code.

How have others dealt with this? I wonder if it's worth exploring a 3rd
term in the foreach statement?

I've tried to approach the problem with std.algorithm, but I find the
std.algorithm statement to be much more noisy and usually longer when the
loops are sufficiently simple (as they usually are in my case, which is why
the trivial conditions are so distracting by contrast).
I also find that the exclamation mark overload in typical std.algorithm
statements tends to visually obscure the trivial condition I'm wanting to
insert in the first place.
It's also really hard to debug std.algorithm statements; you can't step through
them in the debugger anymore.

Also, I have to import std.algorithm, which then imports the universe... >_<
Jan 14 2014
next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 14 January 2014 at 08:23:05 UTC, Manu wrote:
 So, the last few days, 2 things have been coming up constantly.

 foreach is like, the best thing about D. But I often get caught 
 by 2 little
 details that result in my having to create a bunch more visual 
 noise.

 1. A termination condition (ie, while)

 foreach(t; things) iterates each thing, but it's common in 
 traditional for
 loops to have an && in the second 'while' term, to add an 
 additional
 termination condition.
 for(i=0; i<things.length && things[i].someCondition; ++i)

 Or with foreach:
 foreach(i, t; things)
 {
   if(t.someCondition)
     break;
   ...
 }

 I often feel like I want something like this where I would have 
 the
 opportunity to add that additional term I lose with a 
 traditional for loop:
   foreach(i, t; things; t.someCondition)
until
 2. A filter

 The other thing is the ability to skip uninteresting elements. 
 This is
 typically performed with the first line of the loop testing a 
 condition,
 and then continue:
 foreach(i, t; things)
 {
   if(!t.isInteresting)
     continue;
   ...
 }


 I'm finding in practise that at least one of these seems to pop 
 up on the
 vast majority of loops I'm writing. It produces a lot of visual 
 noise, and
 I'm finding it's quite a distraction from the otherwise 
 relative tidiness
 of my code.
filter
 How have others dealt with this? I wonder if it's worth 
 exploring a 3rd
 term in the foreach statement?
We have the tools to build ranges that do what you ask for and feed foreach with them.
Jan 14 2014
prev sibling next sibling parent reply "Jakob Ovrum" <jakobovrum gmail.com> writes:
On Tuesday, 14 January 2014 at 08:23:05 UTC, Manu wrote:
 1. A termination condition (ie, while)

 foreach(t; things) iterates each thing, but it's common in 
 traditional for
 loops to have an && in the second 'while' term, to add an 
 additional
 termination condition.
 for(i=0; i<things.length && things[i].someCondition; ++i)

 Or with foreach:
 foreach(i, t; things)
 {
   if(t.someCondition)
     break;
   ...
 }
foreach(t; things.until!(t => t.someCondition)) { }

Unfortunately foreach over a range does not automatically support an index loop variable. We could add something like std.range.enumerate to support this, but I think it's a common enough requirement that a language amendment is warranted (there are some subtleties involved in implementing it though - specifically when combined with automatic tuple expansion).
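With such a wrapper it would read roughly like this (a sketch only - enumerate is not in Phobos at the time of writing, and the index/value field names are an assumption):

import std.algorithm : until;
import std.range : enumerate; // assumed here - proposed addition, not yet in std.range

struct Thing { bool someCondition; }

void main()
{
    Thing[] things = new Thing[4];
    things[2].someCondition = true;

    // stops at the first element whose someCondition is true, index included
    foreach (i, t; things.enumerate.until!(p => p.value.someCondition))
    {
        assert(i < 2);
    }
}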
 2. A filter

 The other thing is the ability to skip uninteresting elements. 
 This is
 typically performed with the first line of the loop testing a 
 condition,
 and then continue:
 foreach(i, t; things)
 {
   if(!t.isInteresting)
     continue;
   ...
 }
foreach(t; things.filter!(t => t.isInteresting)) { }

Ditto about the index loop variable.
 I've tried to approach the problem with std.algorithm, but I 
 find the
 std.algorithm statement to be much more noisy and usually 
 longer when the
 loops are sufficiently simple (as they usually are in my case, 
 which is why
 the trivial conditions are so distracting by contrast).
The two examples above look a *lot* cleaner and less noisy (declarative!) to me than the imperative approach using if-break or if-continue.
 I also find that the exclamation mark overload in typical 
 std.algorithm
 statements tends to visually obscure the trivial condition I'm 
 wanting to
 insert in the first place.
You'll have to get used to the exclamation mark, otherwise you'll never be able to fully appreciate D's generic programming. I quite like it - I don't think there's anything objectively ugly about it.
 Also, I have to import std.algorithm, which then imports the 
 universe... >_<
This is fixable. We shouldn't reach for language changes to compensate for library deficiencies.
Jan 14 2014
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 14 January 2014 at 08:36:53 UTC, Jakob Ovrum wrote:
 You'll have to get used to the exclamation mark, otherwise 
 you'll never be able to fully appreciate D's generic 
 programming. I quite like it - I don't think there's anything 
 objectively ugly about it.
You subjectively think that there is nothing objectively ugly about it? :-) It is objectively ugly because "!" implies a boolean expression, but that is off-topic.

I agree that chaining of filters and sorting rules is a good solution, provided that you have a high level optimizer capable of transforming the chain into something optimal. You basically need some sort of term-rewriting.
Jan 14 2014
next sibling parent reply "Jakob Ovrum" <jakobovrum gmail.com> writes:
On Tuesday, 14 January 2014 at 08:43:37 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 14 January 2014 at 08:36:53 UTC, Jakob Ovrum wrote:
 You'll have to get used to the exclamation mark, otherwise 
 you'll never be able to fully appreciate D's generic 
 programming. I quite like it - I don't think there's anything 
 objectively ugly about it.
You subjectively think that there is nothing objectively ugly about it? :-) It is objectively ugly because "!" implies a boolean expression, but that is off-topic.
This argument is stupid. It's the same argument as the famous "Yeah, well, that's just, like, your opinion, man", or the "please put `I think...` in front of all your sentences!" argument.

The burden of proof is on the person who first claims it's deficient. Saying that I can't think of anything objectively wrong with it serves the purpose of inviting Manu to provide some kind of argument, as I can't think of anything *obviously* wrong about it that goes unsaid.

It's common to overload tokens in programming languages, and it's usually only a problem for beginners who aren't used to the particular language's choice of overloads yet (Ruby is a good example of a language with rather extreme token reuse) - humans are pretty good at context-sensitive parsing. From a character-by-character perspective it's particularly common, with the bitwise shift operators having nothing to do with comparisons, bitwise xor having nothing to do with exponents etc.

Regardless of whether binary ! is "ugly" or not, it's still better than introducing language features left and right to avoid templates, and it's still better than C++'s template instantiation syntax :)
 I agree that chaining of filters and sorting rules is a good 
 solution, provided that you have a high level optimizer capable 
 of transforming the chain into something optimal. You basically 
 need some sort of term-rewriting.
LDC and GDC are capable of unravelling the (fairly thin) abstraction. All it requires is the ability to inline direct function calls to small functions - the aforementioned compilers always have this capability for templated functions. DMD is hit and miss, but I think there was a recent improvement to its inliner... luckily this is still the domain of micro-optimization.
Jan 14 2014
next sibling parent reply "Jakob Ovrum" <jakobovrum gmail.com> writes:
On Tuesday, 14 January 2014 at 09:06:23 UTC, Jakob Ovrum wrote:

 It's common to overload tokens in programming languages, and 
 it's usually only a problem for beginners who aren't used to 
 the particular language's choice of overloads yet (Ruby is good 
 example of a language with rather extreme token reuse) - humans 
 are pretty good at context-sensitive parsing. From a 
 character-by-character perspective it's particularly common, 
 with the bitwise shift operators having nothing to do with 
 comparisons, bitwise xor having nothing to do with exponents 
 etc.
BTW, I am not implying Manu is a beginner, I suspect he had something more specific in mind when he mentioned the instantiation syntax.
Jan 14 2014
parent Manu <turkeyman gmail.com> writes:
On 14 January 2014 19:09, Jakob Ovrum <jakobovrum gmail.com> wrote:

 On Tuesday, 14 January 2014 at 09:06:23 UTC, Jakob Ovrum wrote:

  It's common to overload tokens in programming languages, and it's usually
 only a problem for beginners who aren't used to the particular language's
 choice of overloads yet (Ruby is good example of a language with rather
 extreme token reuse) - humans are pretty good at context-sensitive parsing.
 From a character-by-character perspective it's particularly common, with
 the bitwise shift operators having nothing to do with comparisons, bitwise
 xor having nothing to do with exponents etc.
BTW, I am not implying Manu is a beginner, I suspect he had something more specific in mind when he mentioned the instantiation syntax.
Thank you... I was going to comment, but I restrained myself ;)
Jan 14 2014
prev sibling next sibling parent reply Manu <turkeyman gmail.com> writes:
On 14 January 2014 19:06, Jakob Ovrum <jakobovrum gmail.com> wrote:

 On Tuesday, 14 January 2014 at 08:43:37 UTC, Ola Fosheim Grøstad wrote:
 On Tuesday, 14 January 2014 at 08:36:53 UTC, Jakob Ovrum wrote:

 You'll have to get used to the exclamation mark, otherwise you'll never
 be able to fully appreciate D's generic programming. I quite like it - I
 don't think there's anything objectively ugly about it.
You subjectively think that there is nothing objectively ugly about it? :-) It is objectively ugly because "!" implies a boolean expression, but that is off-topic.
This argument is stupid. It's the same argument as the famous "Yeah, well,
 that's just, like, your opinion, man", or the "please put `I think...` in
 front of all your sentences!" argument.

 The burden of proof is on the person who first claims it's deficient.
 Saying that I can't think of anything objectively wrong with it serves the
 purpose of inviting Manu to provide some kind of argument, as I can't think
 of anything *obviously* wrong about it that goes unsaid.

 It's common to overload tokens in programming languages, and it's usually
 only a problem for beginners who aren't used to the particular language's
 choice of overloads yet (Ruby is good example of a language with rather
 extreme token reuse) - humans are pretty good at context-sensitive parsing.
 From a character-by-character perspective it's particularly common, with
 the bitwise shift operators having nothing to do with comparisons, bitwise
 xor having nothing to do with exponents etc.

 Regardless of whether binary ! is "ugly" or not, it's still better than
 introducing language features left and right to avoid templates, and it's
 still better than C++'s template instantiation syntax :)
Personally, I generally like the '!' syntax, except when it's in
conjunction with lambdas where it can kind of ruin the statements a bit.

 I agree that chaining of filters and sorting rules is a good solution,
 provided that you have a high level optimizer capable of transforming the
 chain into something optimal. You basically need some sort of
 term-rewriting.
LDC and GDC are capable of unravelling the (fairly thin) abstraction. All it requires is the ability to inline direct function calls to small functions - the aforementioned compilers always have this capability for templated functions. DMD is hit and miss, but I think there was a recent improvement to its inliner... luckily this is still the domain of micro-optimization.
I'm quite concerned about unoptimised performance too. I know that's unusual, but I'm often left wondering what to do about it.

Imagine, iterating over an array, if I use .filter(lambda) or something, there's now an additional 2 function calls at least per iteration, as opposed to the original zero. If this is idiomatic (I'd like to think it should be since it's fairly tidy), then we commit to a serious performance problem in unoptimised code. Liberal use of .empty and things too is a problem in unoptimised code... :/
Jan 14 2014
parent "Jakob Ovrum" <jakobovrum gmail.com> writes:
On Tuesday, 14 January 2014 at 09:21:59 UTC, Manu wrote:

 Personally, I generally like the '!' syntax, except when it's in
 conjunction with lambda's where it can kinda ruins the 
 statements a bit.
I know what you mean, but I think it's more due to the extra parentheses than the exclamation mark, in which case UFCS really helps, e.g.

array.map!(...).filter!(...).copy(sink);

When this is not an option (due to passing more than one argument), splitting the expression into multiple lines still helps.
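Something like this (a small sketch; the appender sink is just an assumption for the example):

import std.algorithm : copy, filter, map;
import std.array : appender;

void main()
{
    auto sink = appender!(int[])();

    [1, 2, 3, 4, 5]
        .map!(x => x * 2)      // each step on its own line
        .filter!(x => x > 4)
        .copy(sink);

    assert(sink.data == [6, 8, 10]);
}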
 I'm quite concerned about unoptimised performance too. I know 
 that's
 unusual, but I'm often left wondering what to do about it.

 Imagine, iterating over an array, if I use .filter(lambda) or 
 something,
 there's now an additional 2 function calls at least per 
 iteration, as
 opposed to the original zero. If this is idiomatic (I'd like to 
 think it
 should be since it's fairly tidy), then we commit to a serious 
 performance
 problem in unoptimised code. Liberal use of .empty and things 
 too is a
 problem in unoptimised code... :/
I think it is the price we pay for choosing function-based generic programming (that is, it's not a problem for mixin-based generics). Idiomatic C++ has the same compromise (except, of course, the mixin approach is only available as the less powerful text preprocessor).
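To illustrate the mixin flavour (just a sketch - pred here is a made-up string predicate):

void main()
{
    enum pred = "x % 2";          // the "generic" part, supplied as text
    auto arr = [1, 2, 3, 4, 5];
    int sum;

    foreach (x; arr)
        if (mixin(pred))          // pasted straight into the loop body - no call
            sum += x;

    assert(sum == 9);
}

There's no call to collapse even in unoptimised builds, but the predicate is a string rather than a first-class function.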
Jan 14 2014
prev sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 14 January 2014 at 09:06:23 UTC, Jakob Ovrum wrote:
 Regardless of whether binary ! is "ugly" or not, it's still 
 better than introducing language features left and right to 
 avoid templates, and it's still better than C++'s template 
 instantiation syntax :)
C++'s templates are horrible for other reasons! D's metaprogramming is much more readable (except for template instantiation, of course ;^)
 LDC and GDC are capable of unravelling the (fairly thin) 
 abstraction. All it requires is the ability to inline direct 
 function calls to small functions - the aforementioned 
 compilers always have this capability for templated functions.
But what happens when you chain multiple modifying functions? A high level optimizer can have heuristics to transform a complex expression into some kind of normal form which the lower levels have heuristics to deal with.
 DMD is hit and miss, but I think there was a recent improvement 
 to its inliner... luckily this is still the domain of 
 micro-optimization.
Not really! I think you need high level optimization in order to have an efficient generic programming environment. Like, if you want to encapsulate generators. (e.g. if you ask a factory for a filtered and sorted list, which returns a generator, then you need more filters, then you send it to another function which is generic and applies sort again etc)
Jan 14 2014
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 01/14/2014 09:43 AM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 objectively ugly
 because "!" implies a boolean expression, but that is
 off-topic.
This is the binary usage of "!". It does not imply a boolean expression similarly to how a * b does not imply an expression on pointers.
Jan 14 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 14 January 2014 at 12:55:45 UTC, Timon Gehr wrote:
 This is the binary usage of "!". It does not imply a boolean 
 expression similarly to how a * b does not imply an expression 
 on pointers.
Are you claiming that the C dereference-operator is a good usability design? If so we really have nothing to discuss! :)
Jan 14 2014
parent reply "Meta" <jared771 gmail.com> writes:
On Tuesday, 14 January 2014 at 14:27:02 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 14 January 2014 at 12:55:45 UTC, Timon Gehr wrote:
 This is the binary usage of "!". It does not imply a boolean 
 expression similarly to how a * b does not imply an expression 
 on pointers.
Are you claiming that the C dereference-operator is a good usability design? If so we really have nothing to discuss! :)
I'm not sure if you're genuinely misunderstanding him or trying as hard as possible to intentionally misrepresent his point.
Jan 14 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 14 January 2014 at 14:38:18 UTC, Meta wrote:
 I'm not sure if you're genuinely misunderstanding him or trying 
 as hard as possible to intentionally misrepresent his point.
I don't think unary/binary is a good enough distinction when the semantics are orthogonal.
Jan 14 2014
parent reply "Dominikus Dittes Scherkl" writes:
On Tuesday, 14 January 2014 at 15:02:19 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 14 January 2014 at 14:38:18 UTC, Meta wrote:
 I'm not sure if you're genuinely misunderstanding him or 
 trying as hard as possible to intentionally misrepresent his 
 point.
I don't think unary/binary is a good enough distinction when the semantics are orthogonal.
I think it is. It would be far worse if the semantics were related.
Jan 14 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 14 January 2014 at 15:34:40 UTC, Dominikus Dittes 
Scherkl wrote:
 I think it is. It would be far worse if the semantics were 
 related.
No. Array subscript and array initialization are both array-like, so no confusion. Same with "+" for concatenation (if you don't do implicit casting) and addition; they are both additive-like. The human mind categorizes in a fuzzy fashion.
Jan 14 2014
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 01/14/2014 04:40 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Tuesday, 14 January 2014 at 15:34:40 UTC, Dominikus Dittes Scherkl
 wrote:
 I think it is. It would be far worse if the semantics were related.
No. Array sub script and array initialization are both array-like, so no confusion. Same with "+" for concatenation (if you don't do implicit casting) and addition, they are both additive-like.
Nope. They are both monoid operations. Addition is commutative, and it is conventional in abstract algebra to use "+" to denote commutative operations only, e.g. http://en.wikipedia.org/wiki/Abelian_group#Notation

assert("123abc" == "123" + "abc");

hurts my eyes.
 The human mind categorize in a fuzzy fashion.
Jan 14 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 14 January 2014 at 16:32:55 UTC, Timon Gehr wrote:
 assert("123abc" == "123" + "abc"); hurts my eyes.
Doesn't matter. The human brain does not work with algebra, it's not a logic engine. It is a fuzzy abstraction engine. Besides, in unary notation you get "111" for 3, "11111" for 5 etc, but this is rather off-topic...
Jan 14 2014
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 01/14/2014 05:50 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Tuesday, 14 January 2014 at 16:32:55 UTC, Timon Gehr wrote:
 assert("123abc" == "123" + "abc"); hurts my eyes.
Doesn't matter.
Thanks.
Jan 14 2014
prev sibling next sibling parent reply Manu <turkeyman gmail.com> writes:
On 14 January 2014 18:36, Jakob Ovrum <jakobovrum gmail.com> wrote:

 On Tuesday, 14 January 2014 at 08:23:05 UTC, Manu wrote:

 1. A termination condition (ie, while)

 foreach(t; things) iterates each thing, but it's common in traditional for
 loops to have an && in the second 'while' term, to add an additional
 termination condition.
 for(i=0; i<things.length && things[i].someCondition; ++i)

 Or with foreach:
 foreach(i, t; things)
 {
   if(t.someCondition)
     break;
   ...
 }
foreach(t; things.until!(t => t.someCondition)) { }

Unfortunately foreach over a range does not automatically support an index loop variable. We could add something like std.range.enumerate to support this, but I think it's a common enough requirement that a language amendment is warranted (there are some subtleties involved in implementing it though - specifically when combined with automatic tuple expansion).

 2. A filter
 The other thing is the ability to skip uninteresting elements. This is
 typically performed with the first line of the loop testing a condition,
 and then continue:
 foreach(i, t; things)
 {
   if(!t.isInteresting)
     continue;
   ...
 }
foreach(t; things.filter!(t => t.isInteresting)) { }

Ditto about the index loop variable.

 I've tried to approach the problem with std.algorithm, but I find the
 std.algorithm statement to be much more noisy and usually longer when the
 loops are sufficiently simple (as they usually are in my case, which is
 why
 the trivial conditions are so distracting by contrast).
The two examples above look a *lot* cleaner and less noisy (declarative!) to me than the imperative approach using if-break or if-continue.
/agree completely. This is nice, I didn't think of writing statements like that :) That's precisely the sort of suggestion I was hoping for. I'll continue like this.
Jan 14 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/14/14 1:04 AM, Manu wrote:
 /agree completely.
 This is nice, I didn't think of writing statements like that :)
 That's precisely the sort of suggestion I was hoping for. I'll continue
 like this.
I think I died and got to heaven. Andrei
Jan 14 2014
parent Manu <turkeyman gmail.com> writes:
On 15 January 2014 03:08, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:
 On 1/14/14 1:04 AM, Manu wrote:

 /agree completely.
 This is nice, I didn't think of writing statements like that :)
 That's precisely the sort of suggestion I was hoping for. I'll continue
 like this.
I think I died and got to heaven.
The problem is I initially tried to use an all-out std.algorithm approach, with map and reduce and stuff...
Jan 14 2014
prev sibling next sibling parent reply Manu <turkeyman gmail.com> writes:
On 14 January 2014 19:04, Manu <turkeyman gmail.com> wrote:

 On 14 January 2014 18:36, Jakob Ovrum <jakobovrum gmail.com> wrote:

 On Tuesday, 14 January 2014 at 08:23:05 UTC, Manu wrote:

 1. A termination condition (ie, while)

 foreach(t; things) iterates each thing, but it's common in traditional
 for
 loops to have an && in the second 'while' term, to add an additional
 termination condition.
 for(i=0; i<things.length && things[i].someCondition; ++i)

 Or with foreach:
 foreach(i, t; things)
 {
   if(t.someCondition)
     break;
   ...
 }
foreach(t; things.until!(t => t.someCondition)) { }

Unfortunately foreach over a range does not automatically support an index loop variable. We could add something like std.range.enumerate to support this, but I think it's a common enough requirement that a language amendment is warranted (there are some subtleties involved in implementing it though - specifically when combined with automatic tuple expansion).

 2. A filter
 The other thing is the ability to skip uninteresting elements. This is
 typically performed with the first line of the loop testing a condition,
 and then continue:
 foreach(i, t; things)
 {
   if(!t.isInteresting)
     continue;
   ...
 }
foreach(t; things.filter!(t => t.isInteresting)) { }

Ditto about the index loop variable.

 I've tried to approach the problem with std.algorithm, but I find the
 std.algorithm statement to be much more noisy and usually longer when the
 loops are sufficiently simple (as they usually are in my case, which is
 why
 the trivial conditions are so distracting by contrast).
The two examples above look a *lot* cleaner and less noisy (declarative!) to me than the imperative approach using if-break or if-continue.
/agree completely. This is nice, I didn't think of writing statements like that :) That's precisely the sort of suggestion I was hoping for. I'll continue like this.
Can anyone comment on the codegen when using these statements? Is it identical to my reference 'if' statement? Liberal use of loops like this will probably obliterate unoptimised performance... :/
Jan 14 2014
next sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Tuesday, 14 January 2014 at 09:07:43 UTC, Manu wrote:
 Can anyone comment on the codegen when using these statements? 
 Is it
 identical to my reference 'if' statement?
 Liberal use of loops like this will probably obliterate 
 unoptimised
 performance... :/
In my experience, unoptimised performance will suffer due to the various function calls and intermediate structs which will be generated naively as-written. Generic range programming in D depends heavily on optimisers to denest everything, which gdc and ldc do manage quite successfully.

Am I correct in thinking that you want (near)playably fast debug builds? I think that will require some extra thought to achieve in idiomatic D compared to your usual C++ work.

pragma(optimise) anyone?
Jan 14 2014
parent reply Manu <turkeyman gmail.com> writes:
On 14 January 2014 20:25, John Colvin <john.loughran.colvin gmail.com>wrote:

 On Tuesday, 14 January 2014 at 09:07:43 UTC, Manu wrote:

 Can anyone comment on the codegen when using these statements? Is it
 identical to my reference 'if' statement?
 Liberal use of loops like this will probably obliterate unoptimised
 performance... :/
In my experience, unoptimised performance will suffer due to the various function calls and intermediate structs which will be generated naively as-written. Generic range programming in D depends heavily on optimisers to denest everything, which gdc and ldc do manage quite successfully. Am I correct in thinking that you want (near)playably fast debug builds? I think that will require some extra thought to achieve in idiomatic D compared to your usual C++ work. pragma(optimise) anyone?
Sorry, that was 2 separate points although it didn't look like it.

The first question was: assuming full optimisation, is it equivalent to the if statement I demonstrate in practice? (I'll do some tests, but people must have already experimented in various circumstances, and identified patterns?)

The second comment was about non-optimised builds; you are correct, it's important that debug builds remain useful/playable. I have worked in codebases where debug builds run in the realm of 1-2 fps, literally unplayable and entirely useless. This is a MASSIVE productivity hindrance. By contrast, at Krome, we took great pride in that our debug build ran at around 10-15fps (actually playable, and quite a unique achievement which did not go unappreciated), which meant we could actually debug complex bugs while still being able to reproduce them. This was almost entirely attributed to our (unpopular) insistence on writing our engine in C instead of C++.

Sadly, I don't see D being any better than C++ in this case. It's likely worse than C++, since it's more convenient, and therefore more compelling to use these meta features.

I'd like to know if anybody can see any path towards inlined lambdas/literals/micro-functions in non-optimised builds? I guess the first question is, is this a problem that is even worth addressing? How many people are likely to object in principle?
Jan 14 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Tuesday, 14 January 2014 at 15:37:21 UTC, Manu wrote:
 Sorry, that was 2 separate points although it didn't look like 
 it.
 The first question was assuming full optimisation, is it 
 equivalent to the
 if statement I demonstrate with full optimisation in practise? 
 (I'll do
 some tests, but people must have already experimented in various
 circumstances, and identified patterns?)
Don't think so. LDC is best at doing such transformations per my observations, but it is still a major area for improvements in general.
 I'd like to know if anybody can see any path towards inlined
 lambda's/literal's/micro-functions in non-optimised builds?
 I guess the first question is, is this a problem that is even 
 worth
 addressing? How many people are likely to object in principle?
I'd object against it in general. An important trait of such builds is that the resulting code gen matches the original source code as much as possible. Mass inlining will kill it.

But I don't see why you use "debug" and "non-optimised" as synonyms here. For example, in DMD "-release" vs "-debug" are orthogonal to "-O". I see no problems with doing an optimised debug build.
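For example (assuming plain DMD here; GDC and LDC spell the flags differently):

dmd -g -debug -O app.d

That keeps debug info and debug {} blocks while still running the optimiser.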
Jan 14 2014
next sibling parent reply "Dicebot" <public dicebot.lv> writes:
Some experimental data.

---- test program ----

auto input()
{
     int[] arr;

     foreach (i; 0..6)
     {
         import std.stdio;
         int x;
         readf(" %d ", &x);
         arr ~= x;
     }

     return arr;
}

int main()
{
     import std.algorithm;

     auto arr = input();
     int sum;

     // version algo
     foreach (elem; arr.filter!( x => x % 2 ))
     {
         sum += elem;
     }

     // version raw
     foreach (elem; arr)
     {
         if (elem % 2)
             sum += elem;
     }

     return sum;
}

------

Disassemblies for latest LDC (ldmd2 -O -release -inline):

$ cat test-ldc-algo.dump
0000000000402eb0 <_Dmain>:
   402eb0:	50                   	push   %rax
   402eb1:	e8 fa fe ff ff       	callq  402db0 <_D4test5inputFZAi>
   402eb6:	48 89 c1             	mov    %rax,%rcx
   402eb9:	31 c0                	xor    %eax,%eax
   402ebb:	48 85 c9             	test   %rcx,%rcx
   402ebe:	74 4d                	je     402f0d <_Dmain+0x5d>
   402ec0:	f6 02 01             	testb  $0x1,(%rdx)
   402ec3:	75 0b                	jne    402ed0 <_Dmain+0x20>
   402ec5:	48 83 c2 04          	add    $0x4,%rdx
   402ec9:	48 ff c9             	dec    %rcx
   402ecc:	75 f2                	jne    402ec0 <_Dmain+0x10>
   402ece:	eb 3d                	jmp    402f0d <_Dmain+0x5d>
   402ed0:	31 c0                	xor    %eax,%eax
   402ed2:	eb 34                	jmp    402f08 <_Dmain+0x58>
   402ed4:	66 66 66 2e 0f 1f 84 	data32 data32 nopw 
%cs:0x0(%rax,%rax,1)
   402edb:	00 00 00 00 00
   402ee0:	03 02                	add    (%rdx),%eax
   402ee2:	66 66 66 66 66 2e 0f 	data32 data32 data32 data32 nopw 
%cs:0x0(%rax,%rax,1)
   402ee9:	1f 84 00 00 00 00 00
   402ef0:	48 85 c9             	test   %rcx,%rcx
   402ef3:	74 1a                	je     402f0f <_Dmain+0x5f>
   402ef5:	48 83 f9 01          	cmp    $0x1,%rcx
   402ef9:	74 12                	je     402f0d <_Dmain+0x5d>
   402efb:	48 ff c9             	dec    %rcx
   402efe:	f6 42 04 01          	testb  $0x1,0x4(%rdx)
   402f02:	48 8d 52 04          	lea    0x4(%rdx),%rdx
   402f06:	74 e8                	je     402ef0 <_Dmain+0x40>
   402f08:	48 85 c9             	test   %rcx,%rcx
   402f0b:	75 d3                	jne    402ee0 <_Dmain+0x30>
   402f0d:	5a                   	pop    %rdx
   402f0e:	c3                   	retq
   402f0f:	bf e0 0a 68 00       	mov    $0x680ae0,%edi
   402f14:	be 9e 01 00 00       	mov    $0x19e,%esi
   402f19:	e8 c2 c2 02 00       	callq  42f1e0 <_d_array_bounds>
   402f1e:	66 90                	xchg   %ax,%ax

$ cat test-ldc-raw.dump
0000000000402eb0 <_Dmain>:
   402eb0:	50                   	push   %rax
   402eb1:	e8 fa fe ff ff       	callq  402db0 <_D4test5inputFZAi>
   402eb6:	48 89 c1             	mov    %rax,%rcx
   402eb9:	31 c0                	xor    %eax,%eax
   402ebb:	48 85 c9             	test   %rcx,%rcx
   402ebe:	74 17                	je     402ed7 <_Dmain+0x27>
   402ec0:	8b 3a                	mov    (%rdx),%edi
   402ec2:	89 fe                	mov    %edi,%esi
   402ec4:	c1 e6 1f             	shl    $0x1f,%esi
   402ec7:	c1 fe 1f             	sar    $0x1f,%esi
   402eca:	21 fe                	and    %edi,%esi
   402ecc:	01 f0                	add    %esi,%eax
   402ece:	48 83 c2 04          	add    $0x4,%rdx
   402ed2:	48 ff c9             	dec    %rcx
   402ed5:	75 e9                	jne    402ec0 <_Dmain+0x10>
   402ed7:	5a                   	pop    %rdx
   402ed8:	c3                   	retq
   402ed9:	0f 1f 80 00 00 00 00 	nopl   0x0(%rax)

So it does inline, but the end result is still less optimal assembly-wise.
Jan 14 2014
next sibling parent reply "David Nadlinger" <code klickverbot.at> writes:
On Tuesday, 14 January 2014 at 16:31:00 UTC, Dicebot wrote:
 So it does inline but end result is still less optimal 
 assembly-wise.
The problem, by the way, seems to be that the program actually contains two loops that LLVM cannot merge: One in the filter() constructor (skipping the initial run of even elements), and the actual foreach loop in main(). David
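Roughly speaking (a sketch of the shape only, not the actual Phobos source):

struct FilterSketch(alias pred, Range)
{
    Range r;

    this(Range input)
    {
        r = input;
        // loop #1: the constructor skips the initial run of non-matching elements
        while (!r.empty && !pred(r.front)) r.popFront();
    }

    @property bool empty() { return r.empty; }
    @property auto front() { return r.front; }

    void popFront()
    {
        r.popFront();
        // this skip gets inlined into the foreach in main() - loop #2
        while (!r.empty && !pred(r.front)) r.popFront();
    }
}

So even after inlining there are still two separate loops for LLVM to reason about.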
Jan 14 2014
parent reply "Peter Alexander" <peter.alexander.au gmail.com> writes:
On Tuesday, 14 January 2014 at 18:38:27 UTC, David Nadlinger 
wrote:
 On Tuesday, 14 January 2014 at 16:31:00 UTC, Dicebot wrote:
 So it does inline but end result is still less optimal 
 assembly-wise.
The problem, by the way, seems to be that the program actually contains two loops that LLVM cannot merge: One in the filter() constructor (skipping the initial run of even elements), and the actual foreach loop in main(). David
I wonder if there is some way the language could help with that in the common case. For example, provide an opApply for all non-infinite ranges that can be used with foreach instead of front/empty/popFront. I imagine opApply is much easier to inline and optimise than trying to piece range operations and state back together to see the underlying loop.
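Something like this, perhaps (a sketch; ByApply/byApply are made-up names):

import std.range : ElementType, isInputRange;

struct ByApply(Range) if (isInputRange!Range)
{
    Range r;

    // foreach over this calls opApply once; the loop body becomes the delegate,
    // and a break propagates back out through the non-zero return value
    int opApply(scope int delegate(ref ElementType!Range) dg)
    {
        for (; !r.empty; r.popFront())
        {
            auto e = r.front;
            if (auto result = dg(e))
                return result;
        }
        return 0;
    }
}

auto byApply(Range)(Range r) { return ByApply!Range(r); }

The whole loop then lives in one function body instead of being reassembled from empty/front/popFront calls.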
Jan 14 2014
parent "monarch_dodra" <monarchdodra gmail.com> writes:
On Tuesday, 14 January 2014 at 19:05:22 UTC, Peter Alexander 
wrote:
 On Tuesday, 14 January 2014 at 18:38:27 UTC, David Nadlinger 
 wrote:
 On Tuesday, 14 January 2014 at 16:31:00 UTC, Dicebot wrote:
 So it does inline but end result is still less optimal 
 assembly-wise.
The problem, by the way, seems to be that the program actually contains two loops that LLVM cannot merge: One in the filter() constructor (skipping the initial run of even elements), and the actual foreach loop in main(). David
I wonder if there is some way the language could help with that in the common case. For example, provide an opApply for all non-infinite ranges that can be used with foreach instead of front/empty/popFront. I imagine opApply is much easier to inline and optimise than trying to piece range operations and state back together to see the underlying loop.
Actually, you could give an opApply for infinite ranges too. foreach loops can have breaks, and opApply knows how to handle them.
Jan 14 2014
prev sibling parent reply Manu <turkeyman gmail.com> writes:
On 15 January 2014 02:30, Dicebot <public dicebot.lv> wrote:

 Some experimental data.

 ---- test program ----
 ...
 So it does inline but end result is still less optimal assembly-wise.
Right, thanks for that. I'm quite surprised by how bad that turned out actually, and with LDC, which is usually the best at optimising that sort of thing. Need to do some intensive experimentation... but this is a bit concerning. Cheers.
Jan 14 2014
parent reply "David Nadlinger" <code klickverbot.at> writes:
On Wednesday, 15 January 2014 at 01:42:07 UTC, Manu wrote:
 Right, thanks for that.
 I'm quite surprised by how bad that turned out actually, and 
 with LDC,
 which is usually the best at optimising that sort of thing.
 Need to do some intensive experimentation... but this is a bit 
 concerning.
GCC might do some loop merging, I haven't checked. It's just that this pattern doesn't tend to appear too much in traditional C/C++ code (after all, who splits up their loops into two parts just for fun?), so it could be that the LLVM people just never really bothered to write a pass to merge single loops that have been split (as opposed to classical loop fusion, where the loop ranges are the same, but the operations performed/target data different). Maybe it would be possible to implement something like this fairly easily using the existing LLVM loop analyses though. David
Jan 14 2014
next sibling parent Manu <turkeyman gmail.com> writes:
On 15 January 2014 12:22, David Nadlinger <code klickverbot.at> wrote:

 On Wednesday, 15 January 2014 at 01:42:07 UTC, Manu wrote:

 Right, thanks for that.
 I'm quite surprised by how bad that turned out actually, and with LDC,
 which is usually the best at optimising that sort of thing.
 Need to do some intensive experimentation... but this is a bit concerning.
GCC might do some loop merging, I haven't checked. It's just that this pattern doesn't tend to appear too much in traditional C/C++ code (after all, who splits up their loops into two parts just for fun?), so it could be that the LLVM people just never really bothered to write a pass to merge single loops that have been split (as opposed to classical loop fusion, where the loop ranges are the same, but the operations performed/target data different). Maybe it would be possible to implement something like this fairly easily using the existing LLVM loop analyses though.
Okay. As long as the problem is understood and a solution seems realistic. The closure allocation is also an important problem to fix, but that one's been on the list a long time.
Jan 14 2014
prev sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Wednesday, 15 January 2014 at 02:22:22 UTC, David Nadlinger 
wrote:
 On Wednesday, 15 January 2014 at 01:42:07 UTC, Manu wrote:
 Right, thanks for that.
 I'm quite surprised by how bad that turned out actually, and 
 with LDC,
 which is usually the best at optimising that sort of thing.
 Need to do some intensive experimentation... but this is a bit 
 concerning.
GCC might do some loop merging, I haven't checked. It's just that this pattern doesn't tend to appear too much in traditional C/C++ code (after all, who splits up their loops into two parts just for fun?), so it could be that the LLVM people just never really bothered to write a pass to merge single loops that have been split (as opposed to classical loop fusion, where the loop ranges are the same, but the operations performed/target data different). Maybe it would be possible to implement something like this fairly easily using the existing LLVM loop analyses though. David
Actually this is a useful optimization technique when you need to skip a lot of objects that way. See http://www.onversity.com/load/d-loop.pdf
Jan 14 2014
parent "David Nadlinger" <code klickverbot.at> writes:
On Wednesday, 15 January 2014 at 07:08:14 UTC, deadalnix wrote:
 Actually this is a useful optimization technique when you need 
 to skip a lot of objects that way. See 
 http://www.onversity.com/load/d-loop.pdf
Well, yes, except for the fact that the case discussed here isn't an instance of the optimization. Even the "pre-skip" loop performs two conditional jumps per array element.

And worse, the form the loop is in causes LLVM to miss two additional optimization opportunities, compared to the plain C (D) version: First, LLVM actually optimizes away the even/odd conditional branch in the plain version, replacing it with a shl/sar/and combination. The rotated loop structure in the filter version seems to cause the optimizer to miss that opportunity. And second, as a consequence the loop in the filter version is no longer a good candidate for vectorization, whereas LLVM will happily emit AVX/… instructions in the plain case if you let it.

David
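For reference, the branchless form that shl/sar/and sequence corresponds to is roughly the following (a sketch, assuming 32-bit int elements):

int sumOdds(const(int)[] arr)
{
    int sum;
    foreach (x; arr)
        sum += (x << 31 >> 31) & x;   // all-ones mask when x is odd, zero when even
    return sum;
}

which is then a straightforward candidate for vectorization.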
Jan 15 2014
prev sibling parent reply Manu <turkeyman gmail.com> writes:
On 15 January 2014 02:16, Dicebot <public dicebot.lv> wrote:

 On Tuesday, 14 January 2014 at 15:37:21 UTC, Manu wrote:

 Sorry, that was 2 separate points although it didn't look like it.
 The first question was assuming full optimisation, is it equivalent to the
 if statement I demonstrate with full optimisation in practise? (I'll do
 some tests, but people must have already experimented in various
 circumstances, and identified patterns?)
Don't think so. LDC is best at doing such transformation per my observations but it still major are for improvements in general.
So you're saying that using these primitives like until! and filter! will not produce the same code as my if statement when optimised? That's worrying. I'll have to try it out.

 I'd like to know if anybody can see any path towards inlined
 lambda's/literal's/micro-functions in non-optimised builds?
 I guess the first question is, is this a problem that is even worth
 addressing? How many people are likely to object in principle?
I'd object against it general. Important trait of such builds is resulting code gen that matches original source code as much as possible. Mass inlining will kill it. But I don't see why you use "debug" and "non-optimised" as synonyms here. For example, in DMD "-release" vs "-debug" are orthogonal to "-O". I see no problems with doing optimised debug build.
You can't step a -O build, or inspect the value of most variables. So you can't really debug.
Jan 14 2014
parent "Kagamin" <spam here.lot> writes:
On Wednesday, 15 January 2014 at 01:28:26 UTC, Manu wrote:
 You can't step a -O build, or inspect the value of most 
 variables. So you
 can't really debug.
The problem with optimizations is that code mutates a lot. Can you expect the debugger to step through inlined functions? Loop merging can be even more evil in this regard.
Jan 16 2014
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 14 January 2014 at 09:07:43 UTC, Manu wrote:
 Can anyone comment on the codegen when using these statements? 
 Is it
 identical to my reference 'if' statement?
 Liberal use of loops like this will probably obliterate 
 unoptimised
 performance... :/
I can't comment for every compiler, but when using LLVM as a backend, here is what to expect.

LLVM is really good at inlining. However, it inlines mostly bottom-up, which means that the front of the subrange will be inlined in the outer front, popFront of the subrange will be inlined in the popFront of the outer range, etc... You'll generally have pretty good performance, but 2 bad scenarios can happen.

For most ranges, the compiler will find good optimization between front/popFront/empty and friends. But the way this stuff will be inlined, you'll get bigger and bigger front/popFront/empty until they all are inlined in the loop and everything is simplified away by the optimizer. If you wrap enough ranges into one another, you'll reach a point where the compiler will consider that these methods became too big to be good candidates to inline. Things should improve over time as the LLVM team is working on a better top-down inliner.

The second caveat is automatic heap promotion of the closure frame pointer. Right now, optimizers aren't really good at heap-to-stack promotion in D as they mostly don't understand the runtime (LDC has some progress in that direction, but this is still quite simplistic). That means that you can end up with unwanted heap allocation.

Realistically, it is possible for a compiler to see through any reasonably sized range code, but for now, it is kind of lacking in some directions.
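A typical case of that second caveat looks like this (a sketch; the names are made up):

import std.algorithm : filter;

auto interesting(int threshold, int[] arr)
{
    // the lambda captures `threshold` and the returned range escapes the
    // function, so the enclosing frame ends up on the GC heap unless the
    // optimizer can prove it doesn't need to
    return arr.filter!(x => x > threshold);
}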
Jan 14 2014
prev sibling parent reply Marco Leise <Marco.Leise gmx.de> writes:
Am Tue, 14 Jan 2014 19:07:33 +1000
schrieb Manu <turkeyman gmail.com>:

 On 14 January 2014 19:04, Manu <turkeyman gmail.com> wrote:
 
 On 14 January 2014 18:36, Jakob Ovrum <jakobovrum gmail.com> wrote:

 On Tuesday, 14 January 2014 at 08:23:05 UTC, Manu wrote:

 1. A termination condition (ie, while)

 foreach(t; things) iterates each thing, but it's common in traditional
 for
 loops to have an && in the second 'while' term, to add an additional
 termination condition.
 for(i=0; i<things.length && things[i].someCondition; ++i)

 Or with foreach:
 foreach(i, t; things)
 {
   if(t.someCondition)
     break;
   ...
 }
foreach(t; things.until!(t => t.someCondition)) { }

Unfortunately foreach over a range does not automatically support an index loop variable. We could add something like std.range.enumerate to support this, but I think it's a common enough requirement that a language amendment is warranted (there are some subtleties involved in implementing it though - specifically when combined with automatic tuple expansion).

 2. A filter
 The other thing is the ability to skip uninteresting elements. This is
 typically performed with the first line of the loop testing a condition,
 and then continue:
 foreach(i, t; things)
 {
   if(!t.isInteresting)
     continue;
   ...
 }
foreach(t; things.filter!(t => t.isInteresting)) { }

Ditto about the index loop variable.

 I've tried to approach the problem with std.algorithm, but I find the
 std.algorithm statement to be much more noisy and usually longer when the
 loops are sufficiently simple (as they usually are in my case, which is
 why
 the trivial conditions are so distracting by contrast).
The two examples above look a *lot* cleaner and less noisy (declarative!) to me than the imperative approach using if-break or if-continue.
/agree completely. This is nice, I didn't think of writing statements like that :) That's precisely the sort of suggestion I was hoping for. I'll continue like this.
Can anyone comment on the codegen when using these statements? Is it identical to my reference 'if' statement? Liberal use of loops like this will probably obliterate unoptimised performance... :/
For a moment, I thought the performance crew lost you. :)

D's foreach is awesome. Remember this benchmark?
http://togototo.wordpress.com/2013/08/23/benchmarks-round-two-parallel-go-rust-d-scala-and-nimrod/

The top entry's performance is due to how well LDC2 optimized the foreach, otherwise the code is quite the same as the C++ version.

-- 
Marco
Jan 15 2014
parent Manu <turkeyman gmail.com> writes:
On 16 January 2014 05:56, Marco Leise <Marco.Leise gmx.de> wrote:

 For a moment, I thought the performance crew lost you. :)
Haha, don't worry, that'll never happen! ;)

I'm just trying to be more open minded in this code I'm writing. I bring a lot of C/C++ baggage to my D code, so I'm trying to write it in the most idiomatic D way I can, and then I'll do a thorough performance analysis later, and see how far south it went (and what the optimisers were able to do in practice)... Experience from C/C++ makes me over-cautious wrt performance these days. Modern optimisers are more reliable than the ones I've spent a decade trying to wrestle to do what I want them to.

That said though, I still believe there's value in writing explicitly fast code. If you're quite direct, then it doesn't rely so much on an optimiser, and more importantly, it will continue to run well-ish in non-optimised builds. I'm really wrestling with that; it's not a property I'm comfortable to give up, and all these little templates everywhere that do trivial little things are probably going to interfere with non-optimised performance in a way that exceeds my worst nightmares in C++.

 D's foreach is awesome. Remember this benchmark?
 http://togototo.wordpress.com/2013/08/23/benchmarks-round-two-parallel-go-rust-d-scala-and-nimrod/
 The top entry's performance is due to how well LDC2 optimized
 the foreach, otherwise the code is quite the same as the C++
 version.
Indeed, and I had just presumed that LDC would properly optimise these little functions we're discussing. Rather surprised to learn it doesn't. Hopefully it's not so hard to improve with some time.
Jan 15 2014
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-01-14 09:36, Jakob Ovrum wrote:

 You'll have to get used to the exclamation mark, otherwise you'll never
 be able to fully appreciate D's generic programming. I quite like it - I
 don't think there's anything objectively ugly about it.
Or, in this case, fix the compiler to be able to inline delegates. Or is there any other advantage of using an alias parameter instead of a delegate?

-- 
/Jacob Carlborg
Jan 14 2014
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 01/14/2014 11:09 AM, Jacob Carlborg wrote:
 On 2014-01-14 09:36, Jakob Ovrum wrote:

 You'll have to get used to the exclamation mark, otherwise you'll never
 be able to fully appreciate D's generic programming. I quite like it - I
 don't think there's anything objectively ugly about it.
Or, in this case, fix the compiler to be able to inline delegates. Or is there any other advantage of using an alias parameter instead of a delegate?
1. It is likely to result in faster code when inlining fails. Eg. it will often not allocate when the delegate would, due to nested template instantiation simplifying escape analysis.

2. IFTI limitations. Eg. the following cannot work with the current language unless 'map' is specialized to ranges of int:

[1,2,3].map(x=>2*x)
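A self-contained way to see the difference (mapDg and mapAlias are made-up stand-ins for a delegate-based and an alias-based map):

import std.range : ElementType;

// delegate parameter: the literal's parameter type would have to be inferred
// from T, which is itself being inferred from the literal - so IFTI gives up
auto mapDg(R, T)(R r, T delegate(ElementType!R) fun)
{
    T[] result;
    foreach (e; r) result ~= fun(e);
    return result;
}

// alias parameter: the literal is instantiated against the element type inside
auto mapAlias(alias fun, R)(R r)
{
    typeof(fun(ElementType!R.init))[] result;
    foreach (e; r) result ~= fun(e);
    return result;
}

void main()
{
    auto a = [1, 2, 3].mapAlias!(x => 2 * x);   // fine
    // auto b = [1, 2, 3].mapDg(x => 2 * x);    // fails: x's type can't be inferred (the limitation described above)
}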
Jan 14 2014
parent "Jakob Ovrum" <jakobovrum gmail.com> writes:
On Tuesday, 14 January 2014 at 13:04:58 UTC, Timon Gehr wrote:
 On 01/14/2014 11:09 AM, Jacob Carlborg wrote:
 On 2014-01-14 09:36, Jakob Ovrum wrote:

 You'll have to get used to the exclamation mark, otherwise 
 you'll never
 be able to fully appreciate D's generic programming. I quite 
 like it - I
 don't think there's anything objectively ugly about it.
Or, in this case, fix the compiler to be able to inline delegates. Or is there any other advantage of using an alias parameter instead of a delegate?
1. It is likely to result in faster code when inlining fails. Eg. it will often not allocate when the delegate would, due to nested template instantiation simplifying escape analysis. 2. IFTI limitations. Eg. the following cannot work with the current language unless 'map' is specialized to ranges of int: [1,2,3].map(x=>2*x)
3. Lazy ranges don't have to carry around delegates.

4. Algorithms can specialize depending on the argument function. However, this is currently only possible when string lambdas are used... (e.g. std.algorithm.startsWith does this)

5. Alias parameters receiving function templates have the benefit of being able to instantiate the function template argument multiple times with different template arguments. This is murky territory when it comes to function literals with inferred parameter types though - they are not guaranteed to be implemented in terms of function templates.
Jan 14 2014
prev sibling next sibling parent reply "Dicebot" <public dicebot.lv> writes:
I also think the proposed syntax does not give any advantage, not worth any
language change. If there are any optimization issues with std.algorithm
those must be fixed instead. Idiomatic D is all about ranges so such use
cases must be as efficient as possible.
Jan 14 2014
parent reply Manu <turkeyman gmail.com> writes:
On 14 January 2014 19:22, Dicebot <public dicebot.lv> wrote:

 I also think proposed syntax does not give any advantage, not worth any
 language change. If there are any optimization issues with std.algorithm
 those must be fixed instead. Idiomatic D is all about ranges so such use
 case must be as efficient a possible.
Perhaps __forceinline in the future may create some opportunities to do these things without creating a crap load of superfluous function calls.
Jan 14 2014
parent "Dicebot" <public dicebot.lv> writes:
On Tuesday, 14 January 2014 at 09:28:49 UTC, Manu wrote:
 Perhaps __forceinline in the future may create some 
 opportunities to do
 these things without creating a crap load of superfluous 
 function calls.
I don't think this is the case for __forceinline - it is a problem with generic optimizer logic if such trivial wrappers don't get inlined. The former is a specialized power tool; the latter benefits every single program out there.
Jan 14 2014
prev sibling next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Manu:

 So, the last few days, 2 things have been coming up constantly.

 foreach is like, the best thing about D. But I often get caught 
 by 2 little
 details that result in my having to create a bunch more visual 
 noise.
I am against the suggested changes, because they are not orthogonal enough.

One of the few things that I'd like to change in foreach is support for anonymous loops:

foreach(;0..10)

I'd like an enumerate() in Phobos.

I think the current tuple unpacking in foreach is a very limited and partially broken feature that should be deprecated as soon as possible (and later replaced with something better and more general).

I think the ! for algorithms and ranges adds noise, sometimes I forget them, and I'd like to not need them, but I think they are not going away. So what I'd like are more specific and cleaner error messages when I forget them.

As D programs use more and more algorithm UFCS chains, D compilers will need to optimize that kind of code better, adding specific high-level and mid-level optimizations (rewrite rules, deforestations, etc).

Bye,
bearophile
Jan 14 2014
parent reply "Tobias Pankrath" <tobias pankrath.net> writes:
On Tuesday, 14 January 2014 at 09:55:42 UTC, bearophile wrote:
 As D programs use more and more algorithm UFCS chains, D 
 compilers will need to optimize that kind of code better, 
 adding specific high-level and mid-level optimizations (rewrite 
 rules, deforestations, etc).
You came up with deforestation several times now, so I dug up this paper from Wikipedia:
http://homepages.inf.ed.ac.uk/wadler/papers/deforest/deforest.ps

I skimmed it and it seems that the advantage is elimination of intermediate data structures (i.e. lists), which std.algorithm achieves by design. Could you elaborate how deforestation might apply to std.algorithm?
Jan 14 2014
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 01/14/2014 11:09 AM, Tobias Pankrath wrote:
 On Tuesday, 14 January 2014 at 09:55:42 UTC, bearophile wrote:
 As D programs use more and more algorithm UFCS chains, D compilers
 will need to optimize that kind of code better, adding specific
 high-level and mid-level optimizations (rewrite rules, deforestations,
 etc).
You came up with deforestation several times now, so I digged up this paper from wikipedia http://homepages.inf.ed.ac.uk/wadler/papers/deforest/deforest.ps I skimmed it and it seems that the advantage is elimination of intermediate data structures (i.e. lists), which is std.algorithm achieves by design. Could you elaborate how deforestation might apply to std.algorithm?
I guess it would be about eliminating intermediate arrays.
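E.g. the difference between materialising each stage and keeping the chain lazy (a sketch):

import std.algorithm : filter, map, sum;
import std.array : array;

void main()
{
    // two throw-away arrays get built here ...
    auto eager = [1, 2, 3, 4].map!(x => x * x).array.filter!(x => x > 4).array.sum;

    // ... while the lazy chain never allocates the intermediates
    auto lazySum = [1, 2, 3, 4].map!(x => x * x).filter!(x => x > 4).sum;

    assert(eager == lazySum && eager == 25);
}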
Jan 14 2014
prev sibling parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
foreach(thought; thoughts)
  if(!thought.isInteresting)
    delete thought;
Jan 14 2014
next sibling parent Manu <turkeyman gmail.com> writes:
On 15 January 2014 02:56, Daniel Murphy <yebbliesnospam gmail.com> wrote:

 foreach(thought; thoughts)
  if(!thought.isInteresting)
    delete thought;
That's a redundant filter, your code effectively does nothing ;)
Jan 14 2014
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Jan 15, 2014 at 11:38:13AM +1000, Manu wrote:
 On 15 January 2014 02:56, Daniel Murphy <yebbliesnospam gmail.com> wrote:
 
 foreach(thought; thoughts)
  if(!thought.isInteresting)
    delete thought;
That's a redundant filter, your code effectively does nothing ;)
And 'delete' is deprecated. :-P T -- Bomb technician: If I'm running, try to keep up.
Jan 14 2014