
digitalmars.D - C++ Ranges proposal for the Standard Library

reply "ZombineDev" <valid_email he.re> writes:
I saw [this][0] proposal for adding ranges to C++'s standard
library. The [paper][1] looks at D style ranges, but concludes:

 Since iterators can implement D ranges, but D ranges cannot be 
 used to implement iterators, we conclude that iterators form a 
 more powerful and foundational basis.
What do you guys think?

[0]: https://isocpp.org/blog/2014/10/ranges
[1]: https://ericniebler.github.io/std/wg21/D4128.html
Oct 17 2014
next sibling parent reply Marco Leise <Marco.Leise gmx.de> writes:
On Fri, 17 Oct 2014 09:17:51 +0000, "ZombineDev" <valid_email he.re> wrote:

 I saw [this][0] proposal for adding ranges to C++'s standard
 library. The [paper][1] looks at D style ranges, but concludes:
 
 Since iterators can implement D ranges, but D ranges cannot be 
 used to implement iterators, we conclude that iterators form a 
 more powerful and foundational basis.
What do you guys think?

[0]: https://isocpp.org/blog/2014/10/ranges
[1]: https://ericniebler.github.io/std/wg21/D4128.html
True. Iterators are more foundational, ranges are more neat-o. ;)
When you look at this C++ function as part of a signal processing
homework you know why you don't want to see them in top level code:

// C++
double mittelwert(const vector<double>& vektor)
{
    vector<double>::const_iterator it;
    double summe = 0;
    for (it = vektor.begin(); it != vektor.end(); ++it) {
        summe += *it;
    }
    return summe / vektor.size();
}

// D (removing iterators)
double mittelwert(in double[] vektor)
{
    double summe = 0;
    foreach (wert; vektor) {
        summe += wert;
    }
    return summe / vektor.length;
}

// D (using range sum function)
double mittelwert(in double[] vektor)
{
    return sum(vektor) / vektor.length;
}

-- Marco
Oct 17 2014
next sibling parent reply "Olivier Grant" <olivier.grant gmail.com> writes:
On Friday, 17 October 2014 at 09:52:26 UTC, Marco Leise wrote:
 On Fri, 17 Oct 2014 09:17:51 +0000, "ZombineDev" <valid_email he.re> wrote:

 I saw [this][0] proposal for adding ranges to C++'s standard
 library. The [paper][1] looks at D style ranges, but concludes:
 
 Since iterators can implement D ranges, but D ranges cannot 
 be used to implement iterators, we conclude that iterators 
 form a more powerful and foundational basis.
What do you guys think?

[0]: https://isocpp.org/blog/2014/10/ranges
[1]: https://ericniebler.github.io/std/wg21/D4128.html
 True. Iterators are more foundational, ranges are more neat-o. ;)
 When you look at this C++ function as part of a signal processing
 homework you know why you don't want to see them in top level code:

 // C++
 double mittelwert(const vector<double>& vektor)
 {
     vector<double>::const_iterator it;
     double summe = 0;
     for (it = vektor.begin(); it != vektor.end(); ++it) {
         summe += *it;
     }
     return summe / vektor.size();
 }

 // D (removing iterators)
 double mittelwert(in double[] vektor)
 {
     double summe = 0;
     foreach (wert; vektor) {
         summe += wert;
     }
     return summe / vektor.length;
 }

 // D (using range sum function)
 double mittelwert(in double[] vektor)
 {
     return sum(vektor) / vektor.length;
 }
No, the equivalent implementation in C++ is this:

double mittelwert(const vector<double>& vektor)
{
    double summe = 0;
    for (auto &x : vektor) {
        summe += x;
    }
    return summe / vektor.size();
}
Oct 17 2014
parent reply "eles" <eles eles.com> writes:
On Friday, 17 October 2014 at 10:10:23 UTC, Olivier Grant wrote:
 On Friday, 17 October 2014 at 09:52:26 UTC, Marco Leise wrote:
 On Fri, 17 Oct 2014 09:17:51 +0000, "ZombineDev" <valid_email he.re> wrote:
 No, the equivalent implementation in C++ is this:
double mittelwert_accumulate(const vector<double>& vektor)
{
    return accumulate(vektor.begin(), vektor.end(), 0.0) / vektor.size();
}
Oct 17 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Friday, 17 October 2014 at 10:17:38 UTC, eles wrote:
 double mittelwert_accumulate(const vector<double>& vektor)
 {
 	return accumulate(vektor.begin(), vektor.end(), 0.0) / 
 vektor.size();
 }
template<typename T>
auto mittelwert_accumulate(const T& vektor)
{
    return accumulate(cbegin(vektor), cend(vektor), 0.0)
         / distance(cbegin(vektor), cend(vektor));
}

(not tested :^)
Oct 17 2014
parent reply "eles" <eles eles.com> writes:
On Friday, 17 October 2014 at 11:44:29 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 17 October 2014 at 10:17:38 UTC, eles wrote:
 (not tested :^)
template<typename T>
auto mittelwert_accumulate2(const T& vektor) -> typename T::value_type
{
    return accumulate(vektor.cbegin(), vektor.cend(),
                      static_cast<typename T::value_type>(0))
         / distance(vektor.cbegin(), vektor.cend());
}

(gcc does not yet support std::cbegin() and std::cend()).

(tested :^)
Oct 17 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Friday, 17 October 2014 at 13:27:37 UTC, eles wrote:
 (gcc does not yet support std::cbegin() and std::cend()).

 (tested :^)
Thanks ;)

Like your new version, but I don't think it will work with regular
arrays, and maybe the accumulator will overflow if you sum a large
number of ubytes? I think C++14 will make stuff easier. :-)
Oct 17 2014
parent "eles" <eles215 gzk.dot> writes:
On Friday, 17 October 2014 at 14:01:46 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 17 October 2014 at 13:27:37 UTC, eles wrote:
 (gcc does not yet support std::cbegin() and std::cend()).

 (tested :^)
Thanks ;) Like your new version, but I don't think it will work with regular arrays and maybe the accumulator will overflow
Just replace static_cast<typename T::value_type> with static_cast<U>,
U being a second template argument that defaults to T::value_type.
Then you can control the size of the result, but you need to
explicitly instantiate the template with types.

A more general and pleasant solution would be to use D ;)
Oct 17 2014
prev sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Friday, 17 October 2014 at 09:52:26 UTC, Marco Leise wrote:
 True. Iterators are more foundational, ranges are more
 neat-o. ;)
Python is more accurate, succinct and generic :-)

Fraction(Fraction(sum(a)), len(a))

or

Fraction(sum([Fraction(n) for n in a]), len(a))
Oct 17 2014
parent reply Paulo Pinto <pjmlp progtools.org> writes:
On 17.10.2014 at 17:14, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com> wrote:
 On Friday, 17 October 2014 at 09:52:26 UTC, Marco Leise wrote:
 True. Iterators are more foundational, ranges are more
 neat-o. ;)
 Python is more accurate, succinct and generic :-)

 Fraction(Fraction(sum(a)), len(a))

 or

 Fraction(sum([Fraction(n) for n in a]), len(a))
And sloooowwwweeeerrr
Oct 17 2014
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Friday, 17 October 2014 at 17:14:24 UTC, Paulo Pinto wrote:
 And sloooowwwweeeerrr
Accurate is slower, but not this:

sum(a)/len(a)
Oct 17 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Friday, 17 October 2014 at 17:16:29 UTC, Ola Fosheim Grøstad 
wrote:
 Accurate is slower, but not this:

 sum(a)/len(a)
Forgot to point out that the original point with mentioning Python in
the thread is:

1. Compiled static languages still have a long way to go with
expressiveness.

2. Generic operations on double will lead to wrong results. What
happens if your first value is very large? You lose the accumulation
of the smaller values; in the worst case you only get the first value.
Thus you have to sort the values by exponent before accumulating, or
convert them into a different format.

So yes, C++ iterators and D ranges are kind of cool, but cannot beat a
well engineered library and a good mapping to it on the language side.
(Python is of course not optimal in any way.)
Oct 17 2014
next sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
On 17.10.2014 at 19:46, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com> wrote:
 On Friday, 17 October 2014 at 17:16:29 UTC, Ola Fosheim Grøstad wrote:
 Accurate is slower, but not this:

 sum(a)/len(a)
 Forgot to point out that the original point with mentioning Python in
 the thread is:

 1. Compiled static languages still have a long way to go with
 expressiveness.

 2. Generic operations on double will lead to wrong results. What
 happens if your first value is very large? You lose the accumulation
 of the smaller values; in the worst case you only get the first value.
 Thus you have to sort the values by exponent before accumulating, or
 convert them into a different format.

 So yes, C++ iterators and D ranges are kind of cool, but cannot beat a
 well engineered library and a good mapping to it on the language side.
 (Python is of course not optimal in any way.)
I don't think I would have those issues with ML languages. -- Paulo
Oct 17 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Friday, 17 October 2014 at 19:07:26 UTC, Paulo Pinto wrote:
 I don't think I would have those issues with ML languages.
That's probably true, ML has had some generic support for 40+ years?
Oct 17 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/17/2014 10:46 AM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com> wrote:
 On Friday, 17 October 2014 at 17:16:29 UTC, Ola Fosheim Grøstad wrote:
 Accurate is slower, but not this:

 sum(a)/len(a)
 Forgot to point out that the original point with mentioning Python in
 the thread is:

 1. Compiled static languages still have a long way to go with
 expressiveness.
D has closed that gap quite a bit.
 2. Generic operations on double will lead to wrong results. What
 happens if your first value is very large? You lose the accumulation
 of the smaller values; in the worst case you only get the first
 value. Thus you have to sort the values by exponent before
 accumulating, or convert them into a different format.
Or use the much-hated much-misunderstood 80 bit reals! In any case, just insert a .sort operation. It's still generic.
 So yes, C++ iterators and D ranges are kind of cool, but cannot beat a well
 engineered library and a good mapping to it on the language side. (Python is of
 course not optimal in any way.)
80 bit reals can compensate for a lot of loss-of-precision issues.
Oct 17 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Saturday, 18 October 2014 at 04:35:07 UTC, Walter Bright wrote:
 1. Compiled static languages still have a long way to go with 
 expressiveness.
D has closed that gap quite a bit.
Better than C++, but C++ is gaining ground with C++14.
 Or use the much-hated much-misunderstood 80 bit reals!
Yikes! Those are legacy and only help a little bit.
 In any case, just insert a .sort operation. It's still generic.
"generic"! But a .sort isn't enough since the accumulator leads to lost precision. To improve that you have to add pairs that are close in exponent, and keep iterating, but still some losses. To do it right you could create 2048 accumulators (one for each exponent) and add the mantissas as integers. Not at all generic!
 80 bit reals can compensate for a lot of loss-of-precision 
 issues.
Larger mantissas can help a little bit, but only a little bit. You
would need a 2000-bit mantissa to do it with a large dataset with a
wide range of values. Doing perfect calculations with rational numbers
and comparing the result to floating point is a nice exercise.
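That exercise is easy to sketch in Python with the standard fractions module (the helper name `exact_sum` is made up here): every double is an exact rational number, so the accumulation itself loses nothing and only the final conversion back to float rounds.

```python
from fractions import Fraction

def exact_sum(xs):
    # Every double is an exact rational, so summing Fractions is
    # lossless; only the final float() conversion rounds.
    return float(sum(Fraction(x) for x in xs))

a = [1e16, 3.141592653589793, -1e16]
print(sum(a))        # naive accumulation absorbs pi almost entirely
print(exact_sum(a))  # the rational sum survives the cancellation
```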
Oct 17 2014
parent reply "eles" <eles215 gzk.dot> writes:
On Saturday, 18 October 2014 at 05:54:01 UTC, Ola Fosheim Grøstad 
wrote:
 On Saturday, 18 October 2014 at 04:35:07 UTC, Walter Bright 
 wrote:
 Larger mantissa can help a little bit, but only a little bit.
https://en.wikipedia.org/wiki/Kahan_summation_algorithm
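The algorithm from that article is only a few lines; here is a Python sketch (the helper name `kahan` is ours, matching the name used informally later in the thread):

```python
def kahan(xs):
    # Compensated summation: c carries the low-order bits that each
    # addition into the running sum would otherwise discard.
    s = 0.0
    c = 0.0
    for x in xs:
        y = x - c        # apply the stored correction first
        t = s + y        # big + small: low bits of y get lost here
        c = (t - s) - y  # algebraically zero; numerically, the lost bits
        s = t
    return s

print(sum([0.1] * 10))    # plain sum accumulates rounding error
print(kahan([0.1] * 10))  # compensated result
```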
Oct 18 2014
next sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Saturday, 18 October 2014 at 08:21:47 UTC, eles wrote:
 On Saturday, 18 October 2014 at 05:54:01 UTC, Ola Fosheim 
 Grøstad wrote:
 On Saturday, 18 October 2014 at 04:35:07 UTC, Walter Bright 
 wrote:
 Larger mantissa can help a little bit, but only a little bit.
https://en.wikipedia.org/wiki/Kahan_summation_algorithm
As that article also points out (IIRC), you can get pretty good
results by divide-and-conquer without any extra work.
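The divide-and-conquer scheme can be sketched in a few lines of Python (`pairwise_sum` is a hypothetical helper, not a library function); the point is that both halves are summed independently, so worst-case error grows like O(log n) instead of O(n):

```python
def pairwise_sum(xs):
    # Divide and conquer: recurse on each half, add the two results.
    if len(xs) <= 2:
        return sum(xs)  # base case: plain sum of at most two terms
    mid = len(xs) // 2
    return pairwise_sum(xs[:mid]) + pairwise_sum(xs[mid:])
```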
Oct 18 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Saturday, 18 October 2014 at 08:58:22 UTC, John Colvin wrote:
 As that article also points out (IIRC), you can got pretty good 
 results by divide-and-conquer without any extra work.
http://code.activestate.com/recipes/393090/

It is built into Python:

import math
a = [1e99, 1.0, -1e99]

print sum(a)
0.0

print math.fsum(a)
1.0
Oct 18 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Saturday, 18 October 2014 at 09:30:20 UTC, Ola Fosheim Grøstad 
wrote:
 On Saturday, 18 October 2014 at 08:58:22 UTC, John Colvin wrote:
 As that article also points out (IIRC), you can got pretty 
 good results by divide-and-conquer without any extra work.
http://code.activestate.com/recipes/393090/ It is built into Python:
Or more accurately: there is a C version built into Python, called
math.fsum(), that is fairly accurate (except the last bit of the
mantissa). I believe the link above provides slower versions that are
accurate.
Oct 18 2014
prev sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Saturday, 18 October 2014 at 08:21:47 UTC, eles wrote:
 On Saturday, 18 October 2014 at 05:54:01 UTC, Ola Fosheim 
 Grøstad wrote:
 On Saturday, 18 October 2014 at 04:35:07 UTC, Walter Bright 
 wrote:
 Larger mantissa can help a little bit, but only a little bit.
https://en.wikipedia.org/wiki/Kahan_summation_algorithm
Kahan helps a little bit:

a = [math.pi*(n**9) for n in range(0,101)] \
    + [-math.pi*(n**9) for n in range(100,0,-1)]
print sum(a), kahan(a), fsum(a)
-1389.4713685 239.156561184 0.0

but not a lot:

a = [1e16, math.pi, -1e16]
print sum(a), kahan(a), fsum(a)
4.0 4.0 3.14159265359

a = [1e17, math.pi, -1e17]
print sum(a), kahan(a), fsum(a)
0.0 0.0 3.14159265359
Oct 18 2014
next sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Saturday, 18 October 2014 at 10:44:59 UTC, Ola Fosheim Grøstad 
wrote:
 On Saturday, 18 October 2014 at 08:21:47 UTC, eles wrote:
 On Saturday, 18 October 2014 at 05:54:01 UTC, Ola Fosheim 
 Grøstad wrote:
 On Saturday, 18 October 2014 at 04:35:07 UTC, Walter Bright 
 wrote:
 Larger mantissa can help a little bit, but only a little bit.
https://en.wikipedia.org/wiki/Kahan_summation_algorithm
 Kahan helps a little bit:

 a = [math.pi*(n**9) for n in range(0,101)] \
     + [-math.pi*(n**9) for n in range(100,0,-1)]
 print sum(a), kahan(a), fsum(a)
 -1389.4713685 239.156561184 0.0

 but not a lot:

 a = [1e16, math.pi, -1e16]
 print sum(a), kahan(a), fsum(a)
 4.0 4.0 3.14159265359

 a = [1e17, math.pi, -1e17]
 print sum(a), kahan(a), fsum(a)
 0.0 0.0 3.14159265359
auto kahanSum(R)(R input)
{
    double sum = 0.0;
    double c = 0.0;
    foreach (double el; input)
    {
        double y = el - c;
        double t = sum + y;
        c = (t - sum) - y;
        sum = t;
    }
    return sum;
}

import std.math;
import std.range;
import std.algorithm;
import std.stdio;

double pi = PI;

void main()
{
    auto a = chain(
        iota(101L).map!((x) => pi * x^^9),
        iota(101L).map!((x) => -pi * x^^9)
    );
    writeln(kahanSum(a));
    writeln(a.sum);
}

$ rdmd kahantest.d
0
-980
Oct 18 2014
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Saturday, 18 October 2014 at 11:43:32 UTC, John Colvin wrote:
             iota(101L).map!((x) => pi * x^^9),
             iota(101L).map!((x) => -pi * x^^9)
Shouldn't the last expression be "-pi * (100-x)^^9" ?
Oct 18 2014
parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Saturday, 18 October 2014 at 11:56:15 UTC, Ola Fosheim Grøstad 
wrote:
 On Saturday, 18 October 2014 at 11:43:32 UTC, John Colvin wrote:
            iota(101L).map!((x) => pi * x^^9),
            iota(101L).map!((x) => -pi * x^^9)
Shouldn't the last expression be "-pi * (100-x)^^9" ?
Yeah, my mistake.

auto a = chain(
    iota(1, 101L).map!((x) => pi * x^^9),
    iota(100L, 0L, -1L).map!((x) => -pi * x^^9)
).array;

writeln(a.kahanSum);        // 111.157
writeln(a.sum);             // -1272
writeln(a.sort().kahanSum); // 0
writeln(a.sort().sum);      // 760
Oct 18 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Saturday, 18 October 2014 at 12:16:24 UTC, John Colvin wrote:
 writeln(a.kahanSum);        // 111.157
 writeln(a.sum);             // -1272
 writeln(a.sort().kahanSum); // 0
Yes, but it is misleading; my test case was bad. Try to add a 1.0
element to the array.

a.append(1.0)
a = sorted(a)
print sum(a), kahan(a), accurate(a)
-1024.0 0.0 1.0
Oct 18 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Saturday, 18 October 2014 at 12:22:38 UTC, Ola Fosheim Grøstad 
wrote:
 On Saturday, 18 October 2014 at 12:16:24 UTC, John Colvin wrote:
 writeln(a.kahanSum);        // 111.157
 writeln(a.sum);             // -1272
 writeln(a.sort().kahanSum); // 0
Yes, but it is misleading, my test case was bad. Try to add a 1.0 element to the array.
Note: the sort should be over abs(x) to have an effect, but then we would need a different dataset since +N and -N will cancel out too easily.
Oct 18 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/18/14, 4:43 AM, John Colvin wrote:
 auto kahanSum(R)(R input)
 {
      double sum = 0.0;
      double c = 0.0;
      foreach(double el; input)
      {
          double y = el - c;
          double t = sum + y;
          c = (t - sum) - y;
          sum = t;
      }
      return sum;
 }
No need to implement it. http://dlang.org/phobos/std_algorithm.html#.sum Andrei
Oct 18 2014
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Saturday, 18 October 2014 at 15:17:09 UTC, Andrei Alexandrescu 
wrote:
 No need to implement it. 
 http://dlang.org/phobos/std_algorithm.html#.sum
It isn't accurate. Python's fsum is around 100 lines of C code and
AFAIK based on this algorithm:

http://www.cs.berkeley.edu/~jrs/papers/robustr.pdf

more:

https://hg.python.org/cpython/file/7c183c782401/Modules/mathmodule.c#l1063
http://www-pequan.lip6.fr/~graillat/papers/rr2005-03.pdf

But I think an integer version is faster on a large dataset.
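The recipe behind the ActiveState link maintains a list of non-overlapping partial sums, Shewchuk-style; roughly like this sketch (simplified, not the exact CPython or recipe source):

```python
def msum(xs):
    # Keep non-overlapping partial sums. Each fast two-sum step is
    # exact: hi + lo == x + y with no information lost to rounding.
    partials = []
    for x in xs:
        i = 0
        for y in partials:
            if abs(x) < abs(y):
                x, y = y, x        # ensure |x| >= |y| for fast two-sum
            hi = x + y
            lo = y - (hi - x)      # the rounding error of hi, exactly
            if lo:
                partials[i] = lo
                i += 1
            x = hi
        partials[i:] = [x]
    # Partials are ordered by increasing magnitude and don't overlap,
    # so this final pass loses essentially nothing.
    return sum(partials, 0.0)
```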
Oct 18 2014
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
Demmel and Hida use different algorithms based on the input size:

http://www.cs.berkeley.edu/~demmel/AccurateSummation.pdf

Another overview of summation algorithms:

http://www.sigsam.org/bulletin/articles/147/sumnums.pdf
Oct 18 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
Zhu and Hayes (2010) «Algorithm 908: Online Exact Summation of 
Floating-Point Streams» is compatible with ranges and looks 
interesting:

http://s3.amazonaws.com/researchcompendia_prod/articles/990d22f230c4b4eb7796f0ed45b209eb-2013-12-23-01-53-27/a37-zhu.pdf
Oct 18 2014
prev sibling parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Saturday, 18 October 2014 at 15:39:36 UTC, Ola Fosheim Grøstad 
wrote:
 On Saturday, 18 October 2014 at 15:17:09 UTC, Andrei 
 Alexandrescu wrote:
 No need to implement it. 
 http://dlang.org/phobos/std_algorithm.html#.sum
It isn't accurate.
Did you look at the doc? It's specially designed to be accurate...
Oct 18 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Saturday, 18 October 2014 at 16:46:31 UTC, monarch_dodra wrote:
 Did you look at the doc. It's specially designed to be 
 accurate...
Change the docs?
Oct 18 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/18/2014 8:17 AM, Andrei Alexandrescu wrote:
 On 10/18/14, 4:43 AM, John Colvin wrote:
 auto kahanSum(R)(R input)
 {
      double sum = 0.0;
      double c = 0.0;
      foreach(double el; input)
      {
          double y = el - c;
          double t = sum + y;
          c = (t - sum) - y;
          sum = t;
      }
      return sum;
 }
No need to implement it. http://dlang.org/phobos/std_algorithm.html#.sum
Wow, I hadn't noticed that before. Sweet!
Oct 18 2014
prev sibling parent reply "eles" <eles215 gzk.dot> writes:
On Saturday, 18 October 2014 at 10:44:59 UTC, Ola Fosheim Grøstad 
wrote:
 On Saturday, 18 October 2014 at 08:21:47 UTC, eles wrote:
 On Saturday, 18 October 2014 at 05:54:01 UTC, Ola Fosheim 
 Grøstad wrote:
 On Saturday, 18 October 2014 at 04:35:07 UTC, Walter Bright 
 wrote:
 Larger mantissa can help a little bit, but only a little bit.
https://en.wikipedia.org/wiki/Kahan_summation_algorithm
Kahan helps a little bit: a = [math.pi*(n**9) for n in range(0,101)] +[-math.pi*(n**9) for n in range(100,0,-1)]
This might simply be a case biased in favor of pairwise summation, as
the numbers are symmetric.
Oct 18 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Saturday, 18 October 2014 at 12:48:00 UTC, eles wrote:
 This might simply be a case biased in favor of pairwise 
 summation, as numbers ar symmetric.
I haven't used pairwise summation though. The best solution is to
partition based on exponent and use accurate integer arithmetic.

The first function at the link I posted above is accurate within the
partitions, but inaccurate in the summation of the partitions… :-/

But yes, you can create a lot worse datasets. Kahan only recovers a
few of the bits lost to rounding.
Oct 18 2014
prev sibling parent reply "eles" <eles215 gzk.dot> writes:
On Friday, 17 October 2014 at 17:14:24 UTC, Paulo Pinto wrote:
 On 17.10.2014 at 17:14, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com> wrote:
 On Friday, 17 October 2014 at 09:52:26 UTC, Marco Leise wrote:
 And sloooowwwweeeerrr
And it has taaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaabs! :((
Oct 17 2014
parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Friday, 17 October 2014 at 19:11:48 UTC, eles wrote:
 On Friday, 17 October 2014 at 17:14:24 UTC, Paulo Pinto wrote:
 On 17.10.2014 at 17:14, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com> wrote:
 On Friday, 17 October 2014 at 09:52:26 UTC, Marco Leise wrote:
 And sloooowwwweeeerrr
And it has taaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaabs! :((
Tabs, in and of themselves, aren't bad. Arguably, they are better than
spaces. In a perfect world, everyone would use tabs everywhere, and
each individual would set their editor to their preferred indent size
(personally, I like two; it's concise).

However, once you start working with people who can't be arsed to keep
a consistent style, then you have to "lower your standard to the
lowest common denominator". That's mostly the reason people tend to
opt for spaces everywhere.

I just read this though: "Python 3 disallows mixing the use of tabs
and spaces for indentation." Fucking WIN. A compiler that will
*refuse* to compile your code because it is too ugly? Mind BLOWN.
Oct 17 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Fri, 17 Oct 2014 19:49:06 +0000
monarch_dodra via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 I just read this though:
 "Python 3 disallows mixing the use of tabs and spaces for=20
 indentation."
 Fucking WIN. A compiler that will *refuse* to compile your code=20
 because it is too ugly? Mind BLOWN.
they're still reinventing "whitespace" language.
Oct 17 2014
prev sibling next sibling parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Friday, 17 October 2014 at 09:17:52 UTC, ZombineDev wrote:
 I saw [this][0] proposal for adding ranges to C++'s standard
 library. The [paper][1] looks at D style ranges, but concludes:

 Since iterators can implement D ranges, but D ranges cannot be 
 used to implement iterators, we conclude that iterators form a 
 more powerful and foundational basis.
What do you guys think?

[0]: https://isocpp.org/blog/2014/10/ranges
[1]: https://ericniebler.github.io/std/wg21/D4128.html
One problem with C++ style iterators is composition, and their
exponential growth, due to their "pair" approach. Indeed, more often
than not, proper iteration *requires* that the iterator *already* know
where the underlying sequence ends. For example, a "stride" adapter
iterator would look like this:

template <typename It>
struct StrideIt
{
    It current;
    It end;
    void operator++()
    {
        ++current;
        if (current != end)
            ++current;
    }
};

Then you combine it with:

StrideIt<It> sit(it, itend);
StrideIt<It> sitend(itend, itend);
for ( ; sit != sitend; ++sit)
    ...

As you can see, it quickly becomes bloated and cumbersome. In
particular, it takes *tons* of lines of code, traits and whatnot to
compose. C++11's "auto" makes things somewhat simpler, but it is still
bloated.

Another issue is that iterators model a *pointer* abstraction.
Iterators *must* have "reference_t". Ranges are more generic in the
sense that they simply model iteration.

*THAT SAID*, as convenient as ranges are, they do suffer from the
"shrink but can't grow" issue. In particular, you can't really "cut" a
range the way you can with iterators: "first, middle, last". If you
are using RA ranges with slicing, it doesn't show too much. However,
if you are using generic bidir ranges, on containers such as "DList",
you really start to feel the pain.

My personal feeling (IMO):
- Consuming, adapting, producing data: Ranges win hands down.
- Managing, shuffling or inserting elements in a container: To be
  honest, I prefer iterators.

Given how C++'s STL is container-centric, whereas D's Phobos is
range-centric, I can totally understand both sides' position.
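For contrast with the iterator-pair version, the same stride adapter written against a range-style abstraction (a Python generator here, purely to illustrate the composition point) collapses to a few lines, because the end of the sequence is implicit in the iteration itself:

```python
def stride(rng, n):
    # Range-style stride adapter: no (current, end) pair is needed,
    # since exhaustion of the underlying range ends the iteration.
    for i, x in enumerate(rng):
        if i % n == 0:
            yield x

print(list(stride(range(10), 2)))
```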
Oct 17 2014
parent Steven Schveighoffer <schveiguy yahoo.com> writes:
On 10/17/14 10:13 AM, monarch_dodra wrote:

 My personal feeling (IMO):
 - Consuming, adapting, producing data: Ranges win hands down.
 - Managing, shuffling or inserting elements in a container: To be
 honest, I prefer iterators.
Dcollections solves this. -Steve
Oct 17 2014
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/17/14, 2:17 AM, ZombineDev wrote:
 I saw [this][0] proposal for adding ranges to C++'s standard
 library. The [paper][1] looks at D style ranges, but concludes:

 Since iterators can implement D ranges, but D ranges cannot be used to
 implement iterators, we conclude that iterators form a more powerful
 and foundational basis.
What do you guys think?

[0]: https://isocpp.org/blog/2014/10/ranges
[1]: https://ericniebler.github.io/std/wg21/D4128.html
Yeah, it's true and somewhat self-evident in the sense that structure build upward (more structure from less). In the same vein pointers are more powerful than slices and untyped data more powerful than typed data. Andrei
Oct 17 2014
prev sibling parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Friday, 17 October 2014 at 09:17:52 UTC, ZombineDev wrote:
 I saw [this][0] proposal for adding ranges to C++'s standard
 library. The [paper][1] looks at D style ranges, but concludes:

 Since iterators can implement D ranges, but D ranges cannot be 
 used to implement iterators, we conclude that iterators form a 
 more powerful and foundational basis.
What do you guys think?
"Since assembly code can be used to implement Java but java cannot be used to implement assembly code, we conclude that assembly code forms a more powerful and foundational basis." I probably could have chosen that example better, but I hope it gets my point across. It's always possible to reduce a library interface to something that's even more stripped-down, which is by definition more powerful and foundational. But if the user never wants to work at that low level of abstraction then you're just imposing an unnecessary burden upon them. A well-designed library provides power and flexibility in a form that encourages a good coding style and which inherently reduces the chance of mistakes. Getting this right is really, really hard to do. I think D gets it more right than C++ though, across the board. Regarding iterators vs. ranges, there are still places where ranges struggle to meet the facility of iterators. But I don't think this is a flaw in the concept so much that it's still fairly new and people are still working out the details. Given the standardization process for C++, they're probably right to remain with iterators for now and wait for all the kinks to be worked out of range design. Maybe get ranges into Boost (if they aren't there already) and see how it goes. But dismissing ranges out of hand for not being sufficiently "powerful and foundational" is just silly.
Oct 18 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/18/2014 9:16 AM, Sean Kelly wrote:
  But dismissing ranges out of hand for not being sufficiently
 "powerful and foundational" is just silly.
I agree. It's like "foreach" in D. It's less powerful and foundational than a "for" loop (in fact, the compiler internally rewrites foreach into for), but that takes nothing away from how darned useful (and far less bug prone) foreach is. Also, the glue layer rewrites "for" loops into "goto" loops, as gotos are more powerful and foundational :-)
Oct 18 2014
next sibling parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Saturday, 18 October 2014 at 17:31:18 UTC, Walter Bright wrote:
 I agree. It's like "foreach" in D. It's less powerful and 
 foundational than a "for" loop (in fact, the compiler 
 internally rewrites foreach into for), but that takes nothing 
 away from how darned useful (and far less bug prone) foreach is.
Actually, there are quite a few bugs related to modifying values and/or indexes in a foreach loop. In particular, foreach allows "ref" iteration over ranges that don't give ref access...
Oct 18 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/18/2014 10:37 AM, monarch_dodra wrote:
 On Saturday, 18 October 2014 at 17:31:18 UTC, Walter Bright wrote:
 I agree. It's like "foreach" in D. It's less powerful and foundational than a
 "for" loop (in fact, the compiler internally rewrites foreach into for), but
 that takes nothing away from how darned useful (and far less bug prone)
 foreach is.
Actually, there are quite a few bugs related to modifying values and/or indexes in a foreach loop. In particular, foreach allows "ref" iteration over ranges that don't give ref access...
All bugs should be reported to bugzilla. I still argue that foreach is far less bug prone than for loops.
Oct 18 2014
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 10/18/2014 07:30 PM, Walter Bright wrote:
 Also, the glue layer rewrites "for" loops into "goto" loops, as gotos
 are more powerful and foundational :-)
Well, not really. It rewrites "for" loops into "goto" loops because that's the only control flow construct implemented by the hardware.
Oct 20 2014
parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Timon Gehr"  wrote in message news:m23eqa$2qfq$1 digitalmars.com... 

 On 10/18/2014 07:30 PM, Walter Bright wrote:
 Also, the glue layer rewrites "for" loops into "goto" loops, as gotos
 are more powerful and foundational :-)
Well, not really. It rewrites "for" loops into "goto" loops because that's the only control flow construct implemented by the hardware.
You might be thinking of the code generator, not the glue layer.
Oct 20 2014
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 10/20/2014 07:21 PM, Daniel Murphy wrote:
 "Timon Gehr"  wrote in message news:m23eqa$2qfq$1 digitalmars.com...
 On 10/18/2014 07:30 PM, Walter Bright wrote:
 Also, the glue layer rewrites "for" loops into "goto" loops, as gotos
 are more powerful and foundational :-)
Well, not really. It rewrites "for" loops into "goto" loops because that's the only control flow construct implemented by the hardware.
You might be thinking of the code generator, not the glue layer.
Why? Does the code generator support for loops?
Oct 20 2014
parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Timon Gehr"  wrote in message news:m23ng5$4v2$1 digitalmars.com...

 Well, not really. It rewrites "for" loops into "goto" loops because
 that's the only control flow construct implemented by the hardware.
 You might be thinking of the code generator, not the glue layer.
Why? Does the code generator support for loops?
The code generator cares what the hardware implements; the glue layer
does not. The choice of "goto" loops, aka basic blocks, as the
representation in the backend is due to ease of analysis, not hardware
constraints.
Oct 21 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 21 October 2014 at 11:19:26 UTC, Daniel Murphy wrote:
 does not.  The choice of "goto" loops aka basic blocks as the 
 representation in the backend is due to ease of analysis, not 
 hardware constraints.
FWIW there are equivalent IR representations that are based on
continuation passing
(http://en.wikipedia.org/wiki/Static_single_assignment_form):

«SSA is formally equivalent to a well-behaved subset of CPS (excluding
non-local control flow, which does not occur when CPS is used as
intermediate representation), so optimizations and transformations
formulated in terms of one immediately apply to the other.»
Oct 21 2014
prev sibling next sibling parent "Sean Kelly" <sean invisibleduck.org> writes:
All that said, after a quick scan I really like the range 
proposal for C++. In particular, the idea of positions is a nice 
one, as it addresses an awkward issue with D ranges.
Oct 18 2014
prev sibling parent reply "bachmeier" <no spam.net> writes:
On Saturday, 18 October 2014 at 16:16:17 UTC, Sean Kelly wrote:
 gets my point across.  It's always possible to reduce a library 
 interface to something that's even more stripped-down, which is 
 by definition more powerful and foundational.  But if the user 
 never wants to work at that low level of abstraction then 
 you're just imposing an unnecessary burden upon them.
Anything that you can do with an iterator can be done with a tail recursive function, but not vice versa, which means iterators serve no purpose.
Oct 19 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Sunday, 19 October 2014 at 11:56:10 UTC, bachmeier wrote:
 Anything that you can do with an iterator can be done with a 
 tail recursive function, but not vice versa, which means 
 iterators serve no purpose.
I guess that comment was ironic, but end-begin kind of disproves it
:-).

I kind of like the concept of "cursors" in databases, where you can
pin-point a position in a query result and continue even after the
table has changed. For a linked list it is easy to get "cursor
semantics". Not so easy for arrays, but it should be possible to
define "cursor enabled containers" with appropriate locking.
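Cursor semantics on a linked list can be sketched like this (class and method names are made up for illustration): a cursor is just a node reference, so insertions elsewhere in the list don't invalidate it.

```python
class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

class LinkedList:
    def __init__(self, values):
        self.head = None
        for v in reversed(values):
            self.head = Node(v, self.head)

    def cursor_at(self, value):
        # A "cursor" is just a node reference: it stays valid even if
        # the list is mutated somewhere else.
        n = self.head
        while n is not None and n.value != value:
            n = n.next
        return n

    def insert_after(self, node, value):
        node.next = Node(value, node.next)

lst = LinkedList([1, 2, 3])
cur = lst.cursor_at(2)            # pin a position
lst.insert_after(lst.head, 99)    # mutate the list elsewhere
print(cur.value, cur.next.value)  # the pinned position is unaffected
```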
Oct 19 2014