## digitalmars.D.announce - avgtime - Small D util for your everyday benchmarking needs

- "Juan Manuel Cabo" <juanmanuel.cabo gmail.com> Mar 21 2012
- "Tove" <tove fransson.se> Mar 21 2012
- "Juan Manuel Cabo" <juanmanuel.cabo gmail.com> Mar 21 2012
- "Nick Sabalausky" <a a.a> Mar 21 2012
- Manfred Nowak <svv1999 hotmail.com> Mar 22 2012
- Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> Mar 22 2012
- Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> Mar 22 2012
- Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> Mar 23 2012
- Manfred Nowak <svv1999 hotmail.com> Mar 23 2012
- Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> Mar 24 2012
- Manfred Nowak <svv1999 hotmail.com> Mar 25 2012
- "Nick Sabalausky" <a a.a> Mar 22 2012
- Manfred Nowak <svv1999 hotmail.com> Mar 22 2012
- Don Clugston <dac nospam.com> Mar 23 2012
- Don Clugston <dac nospam.com> Mar 23 2012
- Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> Mar 23 2012
- Don Clugston <dac nospam.com> Mar 27 2012
- "Nick Sabalausky" <a a.a> Mar 23 2012
- Ary Manzana <ary esperanto.org.ar> Mar 26 2012
- "Juan Manuel Cabo" <juanmanuel.cabo gmail.com> Mar 22 2012
- James Miller <james aatch.net> Mar 22 2012
- "Juan Manuel Cabo" <juanmanuel.cabo gmail.com> Mar 23 2012
- "Juan Manuel Cabo" <juanmanuel.cabo gmail.com> Mar 23 2012
- "Juan Manuel Cabo" <juanmanuel.cabo gmail.com> Mar 23 2012
- "Juan Manuel Cabo" <juanmanuel.cabo gmail.com> Mar 23 2012
- James Miller <james aatch.net> Mar 23 2012
- "Juan Manuel Cabo" <juanmanuel.cabo gmail.com> Mar 23 2012
- "Juan Manuel Cabo" <juanmanuel.cabo gmail.com> Mar 23 2012
- "Juan Manuel Cabo" <juanmanuel.cabo gmail.com> Mar 23 2012
- Marco Leise <Marco.Leise gmx.de> Mar 24 2012
- "Juan Manuel Cabo" <juanmanuel.cabo gmail.com> Mar 26 2012
- "Ary Manzana" <ary esperanto.org.ar> Mar 26 2012
- "Juan Manuel Cabo" <juanmanuel.cabo gmail.com> Mar 26 2012
- "Juan Manuel Cabo" <juanmanuel.cabo gmail.com> Mar 26 2012
- Brad Anderson <eco gnuk.net> Mar 29 2012

This is a small util I wrote in D which is like the unix 'time' command, but it can repeat the command N times and show the median, average, standard deviation, minimum and maximum. As you all know, it is not proper to conclude that a program is faster than another by running each just once. It's Boost-licensed and is on GitHub: https://github.com/jmcabo/avgtime

Example:

    avgtime -r 10 -q ls -lR /etc
    ------------------------
    Total time (ms): 933.742
    Repetitions    : 10
    Median time    : 90.505
    Avg time       : 93.3742
    Std dev.       : 4.66808
    Minimum        : 88.732
    Maximum        : 101.225

The -q argument pipes stderr and stdout of the program under test to /dev/null. I put more info on the GitHub page. HAVE FUN!! --jm

Mar 21 2012

On Thursday, 22 March 2012 at 00:32:31 UTC, Juan Manuel Cabo wrote:This is a small util I wrote in D which is like the unix 'time' command but can repeat the command N times and show median, average, standard deviation, minimum and maximum. [....] I put more info in the github page. HAVE FUN!! --jm

Awesome, I do have a tiny feature request for the next version... a commandline switch to enable automatically discarding the first run as an outlier. /Tove

Mar 21 2012

On Thursday, 22 March 2012 at 01:37:19 UTC, Tove wrote:Awesome, I do have a tiny feature request for the next version... a commandline switch to enable automatically discarding the first run as an outlier. /Tove

Done, I just put it on GitHub (-d switch). But maybe you should be looking at the median to ignore outliers. I also added a -p switch to print all the times:

    ./avgtime -d -q -p -r10 ls -lR /usr/share/doc
    ------------------------
    Total time (ms): 3986.69
    Repetitions    : 10
    Median time    : 397.62
    Avg time       : 398.669
    Std dev.       : 2.95832
    Minimum        : 395.633
    Maximum        : 406.274
    Sorted times   : [395.633, 396.261, 396.273, 397.413, 397.425,
                      397.815, 399.321, 399.719, 400.551, 406.274]

--jm
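Juan's point about the median resisting outliers can be checked numerically with the sorted times printed above. This is an illustrative sketch in Python (avgtime itself is written in D); the 800 ms cold-start value is a made-up outlier, not from the actual run:

```python
# Illustration (not avgtime code): the median barely moves when an
# outlier such as a slow first run is added, while the mean shifts.
from statistics import mean, median

# The sorted times printed by avgtime above (ms)
times = [395.633, 396.261, 396.273, 397.413, 397.425,
         397.815, 399.321, 399.719, 400.551, 406.274]

print(mean(times))    # avgtime's "Avg time" (398.669)
print(median(times))  # avgtime's "Median time" (397.62)

# Simulate a hypothetical 800 ms cold-cache first run
with_outlier = [800.0] + times
print(mean(with_outlier))    # pulled far above 430
print(median(with_outlier))  # still one of the ordinary runs
```

With the outlier included, the mean jumps by more than 35 ms while the median moves by less than a quarter of a millisecond, which is exactly why discarding the first run (or just reading the median) makes sense.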

Mar 21 2012

"Juan Manuel Cabo" <juanmanuel.cabo gmail.com> wrote in message news:zgjczrnyknqsiylhntui forum.dlang.org...This is a small util I wrote in D which is like the unix 'time' command but can repeat the command N times and show median, average, standard deviation, minimum and maximum. [....] I put more info in the github page. HAVE FUN!!

Oooh, that sounds fantastic!

Mar 21 2012

Juan Manuel Cabo wrote:like the unix 'time' command

`version linux' is missing. -manfred

Mar 22 2012

On 3/21/12 7:32 PM, Juan Manuel Cabo wrote:avgtime -r 10 -q ls -lR /etc ------------------------ Total time (ms): 933.742 Repetitions : 10 Median time : 90.505 Avg time : 93.3742 Std dev. : 4.66808 Minimum : 88.732 Maximum : 101.225

Sweet! You may want to also print the mode of the distribution, which is the time of the maximum sample density. http://en.wikipedia.org/wiki/Mode_(statistics) (Warning: nontrivial but informative.) Andrei

Mar 22 2012

On 3/22/12 11:53 PM, Juan Manuel Cabo wrote:On Thursday, 22 March 2012 at 22:22:31 UTC, Andrei Alexandrescu wrote:Sweet! You may want to also print the mode of the distribution, which is the time of the maximum sample density. http://en.wikipedia.org/wiki/Mode_(statistics) (Warning: nontrivial but informative.) Andrei

Thanks for your feedback!Sweet! You may want to also print the mode of the distribution, [....]

Done! Just pushed it to GitHub. I made a histogram too!! (man, the Gaussian curve is everywhere, it never ceases to perplex me).

I'm actually surprised. I'm working on benchmarking lately and the distributions I get are very concentrated around the minimum. Andrei

Mar 22 2012

On 3/23/12 3:02 AM, Juan Manuel Cabo wrote:On Friday, 23 March 2012 at 05:16:20 UTC, Andrei Alexandrescu wrote: [.....](man, the gaussian curve is everywhere, it never ceases to perplex me).

I'm actually surprised. I'm working on benchmarking lately and the distributions I get are very concentrated around the minimum. Andrei

Well, the shape of the curve depends a lot on how the random noise gets inside the measurement.

Hmm, well the way I see it, the observed measurements have the following composition:

X = T + Q + N

where T > 0 (a constant) is the "real" time taken by the processing, Q > 0 is the quantization noise caused by the limited resolution of the clock (it can be considered 0 if the resolution is much smaller than the actual time), and N is noise caused by a variety of factors (other processes, throttling, interrupts, networking, memory hierarchy effects, and many more). The challenge is estimating T given a bunch of X samples.

N can probably be approximated by a Gaussian, although for short timings I noticed it's more like bursts that just cause outliers. But note that N is always positive (therefore not 100% Gaussian), i.e. there's no way to insert some noise that makes the code seem artificially faster. It's all additive.

Taking the mode of the distribution will estimate T + mode(N), which is informative because after all there's no way to eliminate noise. However, if the focus is improving T, we want an estimate as close to T as possible. In the limit, taking the minimum over infinitely many measurements of X would yield T.

Andrei
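Andrei's model can be checked with a tiny simulation. This sketch is in Python for illustration only; the exponential noise distribution and the constants T = 100 and mean noise 5 ms are made-up assumptions, not anything measured:

```python
# Sketch of X = T + N with strictly positive noise: the sample minimum
# converges to T, while the sample mean stays offset by E[N].
import random

random.seed(42)
T = 100.0          # hypothetical "real" time, ms
noise_mean = 5.0   # hypothetical average noise, ms

samples = [T + random.expovariate(1.0 / noise_mean) for _ in range(10_000)]

est_min = min(samples)
est_mean = sum(samples) / len(samples)

print(est_min - T)    # tiny: the minimum lands within a hair of T
print(est_mean - T)   # close to noise_mean: the mean is biased by the noise
```

Because the noise can only add time, every sample sits above T, and with many samples at least one of them has nearly zero noise, so min(X) is a consistent estimator of T under this model.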

Mar 23 2012

Andrei Alexandrescu wrote:In the limit, taking the minimum over infinitely many measurements of X would yield T.

True, if the theoretical variance of the distribution of T is close to zero. But horribly wrong if T depends on an algorithm that is fast only under amortized analysis, because the worst-case scenario will be hidden. -manfred
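Manfred's objection can be made concrete with a simulated cost model (sketched in Python for illustration; the costs are abstract units, not real timings): an amortized-O(1) operation like appending to a doubling dynamic array is cheap almost always, but occasionally pays for a full reallocation, and the minimum cost hides exactly those expensive steps:

```python
# Sketch: an amortized-O(1) operation has rare expensive steps.
# The minimum cost per operation hides them; the mean does not.
costs = []
capacity, size = 1, 0
for _ in range(1024):
    if size == capacity:          # "reallocation": copy all elements
        costs.append(capacity)
        capacity *= 2
    else:
        costs.append(1)           # ordinary cheap append
    size += 1

print(min(costs))                 # 1: worst case invisible
print(max(costs))                 # cost of the biggest reallocation
print(sum(costs) / len(costs))    # amortized cost, bounded by a constant
```

The minimum is 1 unit, yet single operations can cost hundreds of units; only the mean (and the spread) reveal the amortized behavior.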

Mar 23 2012

On 3/23/12 5:42 PM, Manfred Nowak wrote:Andrei Alexandrescu wrote:In the limit, taking the minimum over infinitely many measurements of X would yield T.

True, if the theoretical variance of the distribution of T is close to zero. But horribly wrong if T depends on an algorithm that is fast only under amortized analysis, because the worst-case scenario will be hidden.

Wait, doesn't a benchmark always measure an algorithm with the same input? For collecting a chart of various inputs, there should be various benchmarks. Andrei

Mar 24 2012

Andrei Alexandrescu wrote:Wait, doesn't a benchmark always measure an algorithm with the same input?

The fact that you formulate it as a question indicates that you are unsure about the right answer---me too, but:

1) Surely one can define a benchmark to have this property. But if one uses this definition, the input used would belong to the benchmark as part of its description. I have never seen a description of a benchmark that includes the input, but because I am more interested in theory I may simply have missed such descriptions.

2) If a probabilistic algorithm is used, the meaning of "input" becomes unclear, because the state of the machine influences T.

3) If a heuristic is used by the benchmarked algorithm, then a made-up family of benchmarks can "prove" T = O(n*n) for quicksort.

-manfred

Mar 25 2012

"Juan Manuel Cabo" <juanmanuel.cabo gmail.com> wrote in message news:mytcmgglyntqsoybjcfz forum.dlang.org...On Thursday, 22 March 2012 at 22:22:31 UTC, Andrei Alexandrescu wrote:Sweet! You may want to also print the mode of the distribution, which is the time of the maximum sample density. http://en.wikipedia.org/wiki/Mode_(statistics) (Warning: nontrivial but informative.) Andrei

Thanks for your feedback!Sweet! You may want to also print the mode of the distribution, [....]

Done! Just pushed it to GitHub. I made a histogram too!! (man, the Gaussian curve is everywhere, it never ceases to perplex me). The histogram bins are the most significant digits (three "automatic" levels of precision, with rounding and casting tricks). But I think the most important change is that I'm now showing the 95% and 99% confidence intervals. [....] More info on histogram and confidence intervals in the usage help. [....]

Wow, that's just fantastic! Really, this should be a standard system tool. I think this guy would be proud: http://zedshaw.com/essays/programmer_stats.html

Mar 22 2012

Andrei Alexandrescu wrote:You may want to also print the mode of the distribution, nontrivial but informative

In case of this implementation, and according to the given link: trivial and noninformative, because

| For samples, if it is known that they are drawn from a symmetric
| distribution, the sample mean can be used as an estimate of the
| population mode.

and the program computes the variance as if the values of the sample follow a normal distribution, which is symmetric. Therefore the mode of the sample is of interest only when the variance is calculated wrongly.

-manfred

Mar 22 2012

On 23/03/12 09:37, Juan Manuel Cabo wrote:On Friday, 23 March 2012 at 05:51:40 UTC, Manfred Nowak wrote:| For samples, if it is known that they are drawn from a symmetric | distribution, the sample mean can be used as an estimate of the | population mode.

I'm not printing the population mode, I'm printing the 'sample mode'. It has a very clear meaning: most frequent value. To have frequency, I group into 'bins' by precision: 12.345 and 12.3111 will both go to the 12.3 bin.and the program computes the variance as if the values of the sample follow a normal distribution, which is symmetric.

This program doesn't compute the variance. Maybe you are talking about another program. This program computes the standard deviation of the sample. The sample doesn't need to be of any distribution to have a standard deviation. It is not a distribution parameter, it is a statistic.Therefore the mode of the sample is of interest only when the variance is calculated wrongly.

??? The 'sample mode', 'median' and 'average' can quickly tell you something about the shape of the histogram, without looking at it. If the three coincide, then maybe you are in normal distribution land. The only place where I assume normal distribution is for the confidence intervals. And it's in the usage help. If you want to support estimating weird probability distributions parameters, forking and pull requests are welcome. Rewrites too. Good luck detecting distribution shapes!!!! ;-)-manfred

PS: I should use the Student's t to make the confidence intervals, and for computing that I should use the sample standard deviation (/(n-1)), but that is a completely different story. The z normal with n>30 approximation is quite good. (I would have to embed a table for the Student's t tail factors, pull reqs welcome.) PS2: I now fixed the confusion with the confidence interval of the variable and the confidence interval of the mu average; I simply now show both (release 0.4). PS3: Statistics estimate distribution parameters. --jm

No, it's easy. Student's t is in std.mathspecial.

Mar 23 2012

On 23/03/12 11:20, Don Clugston wrote:On 23/03/12 09:37, Juan Manuel Cabo wrote:On Friday, 23 March 2012 at 05:51:40 UTC, Manfred Nowak wrote:| For samples, if it is known that they are drawn from a symmetric | distribution, the sample mean can be used as an estimate of the | population mode.

I'm not printing the population mode, I'm printing the 'sample mode'. It has a very clear meaning: most frequent value. To have frequency, I group into 'bins' by precision: 12.345 and 12.3111 will both go to the 12.3 bin.and the program computes the variance as if the values of the sample follow a normal distribution, which is symmetric.

This program doesn't compute the variance. Maybe you are talking about another program. This program computes the standard deviation of the sample. The sample doesn't need to be of any distribution to have a standard deviation. It is not a distribution parameter, it is a statistic.Therefore the mode of the sample is of interest only when the variance is calculated wrongly.

??? The 'sample mode', 'median' and 'average' can quickly tell you something about the shape of the histogram, without looking at it. If the three coincide, then maybe you are in normal distribution land. The only place where I assume normal distribution is for the confidence intervals. And it's in the usage help. If you want to support estimating weird probability distributions parameters, forking and pull requests are welcome. Rewrites too. Good luck detecting distribution shapes!!!! ;-)-manfred

PS: I should use the Student's t to make the confidence intervals, and for computing that I should use the sample standard deviation (/(n-1)), but that is a completely different story. The z normal with n>30 approximation is quite good. (I would have to embed a table for the Student's t tail factors, pull reqs welcome.)

No, it's easy. Student's t is in std.mathspecial.

Aargh, I didn't get around to copying it in. But this should do it:

    /** Inverse of Student's t distribution
     *
     * Given probability p and degrees of freedom nu,
     * finds the argument t such that the one-sided
     * studentsDistribution(nu,t) is equal to p.
     *
     * Params:
     *  nu = degrees of freedom. Must be >1
     *  p  = probability. 0 < p < 1
     */
    real studentsTDistributionInv(int nu, real p)
    in
    {
        assert(nu > 0);
        assert(p >= 0.0L && p <= 1.0L);
    }
    body
    {
        if (p == 0) return -real.infinity;
        if (p == 1) return  real.infinity;

        real rk, z;
        rk = nu;

        if (p > 0.25L && p < 0.75L)
        {
            if (p == 0.5L) return 0;
            z = 1.0L - 2.0L * p;
            z = betaIncompleteInv(0.5L, 0.5L * rk, fabs(z));
            real t = sqrt(rk * z / (1.0L - z));
            if (p < 0.5L)
                t = -t;
            return t;
        }

        int rflg = -1; // sign of the result
        if (p >= 0.5L)
        {
            p = 1.0L - p;
            rflg = 1;
        }
        z = betaIncompleteInv(0.5L * rk, 0.5L, 2.0L * p);
        if (z < 0) return rflg * real.infinity;
        return rflg * sqrt(rk / z - rk);
    }

Mar 23 2012

On 3/23/12 5:51 AM, Don Clugston wrote:No, it's easy. Student t is in std.mathspecial.

Aargh, I didn't get around to copying it in. But this should do it.

Shouldn't we put this stuff in std.numeric, or create a std.stat module? I think some functions for t-tests would also be useful. Andrei

Mar 23 2012

On 3/23/12 12:51 AM, Manfred Nowak wrote:Andrei Alexandrescu wrote:You may want to also print the mode of the distribution, nontrivial but informative

In case of this implementation, and according to the given link: trivial and noninformative, because

| For samples, if it is known that they are drawn from a symmetric
| distribution, the sample mean can be used as an estimate of the
| population mode.

and the program computes the variance as if the values of the sample follow a normal distribution, which is symmetric. Therefore the mode of the sample is of interest only when the variance is calculated wrongly.

Again, benchmarks I've seen are always asymmetric. Not sure why those shown here are symmetric. The mode should be very close to the minimum (and in fact I think taking the minimum is a pretty good approximation of the sought-after time). Andrei

Mar 23 2012

On 23/03/12 16:25, Andrei Alexandrescu wrote:On 3/23/12 12:51 AM, Manfred Nowak wrote:Andrei Alexandrescu wrote:You may want to also print the mode of the distribution, nontrivial but informative

In case of this implementation and according to the given link: trivial and noninformative, because [....] Therefore the mode of the sample is of interest only when the variance is calculated wrongly.

Again, benchmarks I've seen are always asymmetric. Not sure why those shown here are symmetric. The mode should be very close to the minimum (and in fact I think taking the minimum is a pretty good approximation of the sought-after time). Andrei

Agreed, I think situations where you would get a normal distribution are rare in benchmarking code. Small sections of code always have a best-case scenario, where there are no cache misses. If there are task switches, the best case is zero task switches. If you use the CPU performance counters, you can identify the *cause* of performance variations. When I've done this, I've always been able to get very stable numbers.

Mar 27 2012

"Juan Manuel Cabo" <juanmanuel.cabo gmail.com> wrote in message news:bqrlhcggehbrzyuhzjuy forum.dlang.org...On Friday, 23 March 2012 at 06:51:48 UTC, James Miller wrote:Dude, this is awesome. I tend to just use time, but if I was doing anything more complicated, I'd use this. I would suggest changing the name while you still can. avgtime is not that informative a name given that it now does more than just "Average" times. -- James Miller

Dude, this is awesome.

Thanks!! I appreciate your feedback!I would suggest changing the name while you still can.

Suggestions welcome!!

"timestats"?

Mar 23 2012

On 3/23/12 4:11 PM, Juan Manuel Cabo wrote:On Friday, 23 March 2012 at 06:51:48 UTC, James Miller wrote:Dude, this is awesome. I tend to just use time, but if I was doing anything more complicated, I'd use this. I would suggest changing the name while you still can. avgtime is not that informative a name given that it now does more than just "Average" times. -- James Miller

Dude, this is awesome.

Thanks!! I appreciate your feedback!I would suggest changing the name while you still can.

Suggestions welcome!! --jm

give_me_d_average

Mar 26 2012

On Thursday, 22 March 2012 at 22:22:31 UTC, Andrei Alexandrescu wrote:Sweet! You may want to also print the mode of the distribution, which is the time of the maximum sample density. http://en.wikipedia.org/wiki/Mode_(statistics) (Warning: nontrivial but informative.) Andrei

Thanks for your feedback!Sweet! You may want to also print the mode of the distribution, [....]

Done! Just pushed it to GitHub. I made a histogram too!! (man, the Gaussian curve is everywhere, it never ceases to perplex me). The histogram bins are the most significant digits (three "automatic" levels of precision, with rounding and casting tricks).

But I think the most important change is that I'm now showing the 95% and 99% confidence intervals. (For the confidence intervals to mean anything, please everyone, remember to control your variables (don't defrag and benchmark :-) !!) so that apples are still apples and don't become oranges, and make sure N>30.) More info on histogram and confidence intervals in the usage help.

    avgtime -q -h -r400 ls /etc
    ------------------------
    Total time (ms): 2751.96
    Repetitions    : 400
    Sample mode    : 6.9 (79 occurrences)
    Median time    : 6.945
    Avg time       : 6.8799
    Std dev.       : 0.93927
    Minimum        : 3.7
    Maximum        : 16.36
    95% conf.int.  : [6.78786, 6.97195]  e = 0.0920468
    99% conf.int.  : [6.75893, 7.00087]  e = 0.12097
    Histogram      :
        msecs: count  normalized bar
          3.7:     2  #
          3.8:     4  ##
          3.9:     1
          4.0:     1
          4.2:     4  ##
          4.3:     1
          4.4:     1
          4.5:     2  #
          4.6:     3  #
          4.7:     2  #
          4.8:     3  #
          4.9:     3  #
          5.2:     1
          5.3:     2  #
          6.1:     1
          6.2:     1
          6.3:     4  ##
          6.4:     6  ###
          6.5:    14  #######
          6.6:    21  ##########
          6.7:    31  ###############
          6.8:    50  #########################
          6.9:    79  ########################################
          7.0:    48  ########################
          7.1:    29  ##############
          7.2:    22  ###########
          7.3:    13  ######
          7.4:     8  ####
          7.5:     7  ###
          7.6:    12  ######
          7.7:     6  ###
          7.8:     6  ###
          7.9:     2  #
          8.0:     3  #
          8.1:     1
          8.2:     1
          8.7:     1
          8.8:     1
          9.1:     1
         11.5:     1
         16.3:     1

--jm
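The 95% interval in that output is the standard large-sample z interval, e = z * s / sqrt(n), and the printed numbers can be cross-checked from the summary statistics alone. Python is used here purely as a calculator for illustration; this is not avgtime's code:

```python
# Cross-check of the 95% confidence interval for the mean printed above,
# using the large-sample normal (z) approximation: e = z * s / sqrt(n).
from math import sqrt
from statistics import NormalDist

n = 400          # Repetitions
avg = 6.8799     # Avg time (ms)
s = 0.93927      # Std dev.

z95 = NormalDist().inv_cdf(0.975)   # two-sided 95% quantile, ~1.96
e = z95 * s / sqrt(n)

print(e)                   # matches the printed e = 0.0920468
print(avg - e, avg + e)    # matches [6.78786, 6.97195] to display precision
```

The same arithmetic with the 99.5% quantile (~2.576) reproduces the 99% interval, which is the "z normal with n>30" approximation Juan mentions later in the thread.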

Mar 22 2012

On 23 March 2012 17:53, Juan Manuel Cabo <juanmanuel.cabo gmail.com> wrote:But I think the most important change is that I'm now showing the 95% and 99% confidence intervals. (For the confidence intervals to mean anything, please everyone, remember to control your variables (don't defrag and benchmark :-) !!) so that apples are still apples and don't become oranges, and make sure N>30). More info on histogram and confidence intervals in the usage help.

Dude, this is awesome. I tend to just use time, but if I was doing anything more complicated, I'd use this. I would suggest changing the name while you still can. avgtime is not that informative a name given that it now does more than just "Average" times. -- James Miller

Mar 22 2012

On Friday, 23 March 2012 at 05:16:20 UTC, Andrei Alexandrescu wrote: [.....](man, the gaussian curve is everywhere, it never ceases to perplex me).

I'm actually surprised. I'm working on benchmarking lately and the distributions I get are very concentrated around the minimum. Andrei

Well, the shape of the curve depends a lot on how the random noise gets inside the measurement. I like 'ls -lR' because the randomness comes from everywhere, and it's quite bell-shaped. I guess there is a lot of I/O mess (even if I/O is all cached, there are lots of opportunities for kernel mutexes to mess everything up, I guess). When testing "/bin/sleep 0.5", it will be quite a pretty boring histogram. And I guess that when testing something that's only CPU bound and doesn't make too many syscalls, the shape is more concentrated in a few values.

On the other hand, I'm getting some weird bimodal (two peaks) curves sometimes, like the one I put in the README.md. It's definitely because of my laptop's CPU throttling, because it went away when I disabled it. (For the curious ones, on Ubuntu 64bit, here is a way to disable throttling (WARNING: might get hot until you undo or reboot):

    echo 1600000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
    echo 1600000 > /sys/devices/system/cpu/cpu1/cpufreq/scaling_min_freq

(yes my cpu is 1.6GHz, but it rocks). --jm

Mar 23 2012

On Thursday, 22 March 2012 at 17:13:58 UTC, Manfred Nowak wrote:Juan Manuel Cabo wrote:like the unix 'time' command

`version linux' is missing. -manfred

Linux only for now. Will make it work on Windows this weekend. I hope that's what you meant. --jm

Mar 23 2012

On Friday, 23 March 2012 at 06:51:48 UTC, James Miller wrote:Dude, this is awesome. I tend to just use time, but if I was doing anything more complicated, I'd use this. I would suggest changing the name while you still can. avgtime is not that informative a name given that it now does more than just "Average" times. -- James Miller

Dude, this is awesome.

Thanks!! I appreciate your feedback!I would suggest changing the name while you still can.

Suggestions welcome!! --jm

Mar 23 2012

On Friday, 23 March 2012 at 05:51:40 UTC, Manfred Nowak wrote:| For samples, if it is known that they are drawn from a symmetric | distribution, the sample mean can be used as an estimate of the | population mode.

I'm not printing the population mode, I'm printing the 'sample mode'. It has a very clear meaning: most frequent value. To have frequency, I group into 'bins' by precision: 12.345 and 12.3111 will both go to the 12.3 bin.and the program computes the variance as if the values of the sample follow a normal distribution, which is symmetric.

This program doesn't compute the variance. Maybe you are talking about another program. This program computes the standard deviation of the sample. The sample doesn't need to be of any distribution to have a standard deviation. It is not a distribution parameter, it is a statistic.Therefore the mode of the sample is of interest only when the variance is calculated wrongly.

??? The 'sample mode', 'median' and 'average' can quickly tell you something about the shape of the histogram, without looking at it. If the three coincide, then maybe you are in normal distribution land. The only place where I assume normal distribution is for the confidence intervals. And it's in the usage help. If you want to support estimating weird probability distributions parameters, forking and pull requests are welcome. Rewrites too. Good luck detecting distribution shapes!!!! ;-)-manfred

PS: I should use the Student's t to make the confidence intervals, and for computing that I should use the sample standard deviation (/(n-1)), but that is a completely different story. The z normal with n>30 approximation is quite good. (I would have to embed a table for the Student's t tail factors, pull reqs welcome.) PS2: I now fixed the confusion with the confidence interval of the variable and the confidence interval of the mu average; I simply now show both (release 0.4). PS3: Statistics estimate distribution parameters. --jm
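The binned "sample mode" Juan describes (most frequent value after grouping timings into precision bins) can be sketched in a few lines. This is an illustration in Python, not avgtime's implementation, and it uses a fixed one-decimal bin for simplicity, whereas avgtime picks the bin width from significant digits:

```python
# Sketch of a binned sample mode: group timings into bins by rounding,
# then pick the bin with the highest count.
from collections import Counter

# Made-up sample of timings (ms)
times = [12.345, 12.3111, 12.2999, 6.91, 6.94, 6.89, 6.9, 7.01]

bins = Counter(round(t, 1) for t in times)   # one-decimal bins
mode_bin, count = bins.most_common(1)[0]

print(mode_bin, count)   # the 6.9 bin wins with 4 occurrences
```

This matches the thread's example: 12.345 and 12.3111 land in the same 12.3 bin, and the mode is whichever bin accumulates the most samples.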

Mar 23 2012

On 23 March 2012 21:37, Juan Manuel Cabo <juanmanuel.cabo gmail.com> wrote:PS: I should use the t student to make the confidence intervals, and for computing that I should use the sample standard deviation (/n-1), but that is a completely different story. The z normal with n>30 aproximation is quite good. (I would have to embed a table for the t student tail factors, pull reqs velcome).

If it's possible to calculate it, then you can generate a table at compile time using CTFE. Less error-prone, and controllable accuracy. -- James Miller

Mar 23 2012

On Friday, 23 March 2012 at 15:33:18 UTC, Andrei Alexandrescu wrote:On 3/23/12 3:02 AM, Juan Manuel Cabo wrote:On Friday, 23 March 2012 at 05:16:20 UTC, Andrei Alexandrescu wrote: [.....](man, the gaussian curve is everywhere, it never ceases to perplex me).

I'm actually surprised. I'm working on benchmarking lately and the distributions I get are very concentrated around the minimum. Andrei

Well, the shape of the curve depends a lot on how the random noise gets inside the measurement.

Hmm, well the way I see it, the observed measurements have the following composition: X = T + Q + N where T > 0 (a constant) is the "real" time taken by the processing [....] However, if the focus is improving T, we want an estimate as close to T as possible. In the limit, taking the minimum over infinitely many measurements of X would yield T. Andrei

In general, I agree with your reasoning. And I appreciate you taking the time to put it so eloquently!! But I think that considering T a constant, and preferring the minimum, misses something. This might work very well for benchmarking mostly CPU-bound processes, but all those other things that you consider noise (disk I/O, network, memory hierarchy, etc.) are part of the elements that make an algorithm or program faster than another, and I would consider them inside T for some applications.

Consider the case depicted in this wonderful (ranty) article that was posted elsewhere in this thread: http://zedshaw.com/essays/programmer_stats.html In a part of the article, the guy talks about a system that worked fast most of the time, but would halt for a good 1 or 2 minutes sometimes. The minimum time for such a system might be a few ms, but the standard deviation would be big. This properly shifts the average time away from the minimum. If programA does the same task as programB with less I/O, or with better memory layout, etc., its average will be better, and maybe its timings won't be so spread out. But the minimum will be the same.

So, in the end, I'm just happy that I could share this little avgtime with you all, and as usual there is no one answer that fits all. For some applications, the minimum will be enough. For others, it's essential to look at how spread out the sample is.

On the symmetry/asymmetry of the distribution topic: I realize, as you said, that T never gets faster than a certain point. But, depending on the nature of the program under test, the good utilization of disk I/O, network, memory, motherboard buses, etc. is what you want inside the test too, and those come with Gaussian-like noises which might dominate over T or not. A program that avoids that other big noise is a better program (all else the same), so I would tend to consider the whole.

Thanks for the eloquence/insightfulness in your post! I'll consider adding chi-squared confidence intervals in the future (and I'm open to more info, or if another distribution might be better). --jm

Mar 23 2012

On Friday, 23 March 2012 at 10:51:37 UTC, Don Clugston wrote:
No, it's easy. Student t is in std.mathspecial.

Aargh, I didn't get around to copying it in. But this should do it.

/** Inverse of Student's t distribution
 * [.....]
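While Don's t inverse is elided above, a confidence interval for the mean can be sketched with the normal quantile from std.mathspecial, which is a reasonable stand-in for the Student t quantile once the repetition count is largish (say, above 30). This is an illustrative sketch, not avgtime's code; `meanConfidenceInterval` is a made-up name:

```d
import std.math : sqrt;
import std.mathspecial : normalDistributionInverse;

/// Two-sided confidence interval for the mean of n samples,
/// using the normal (z) approximation. For small n, the z
/// quantile understates the interval; a Student t quantile
/// with n-1 degrees of freedom would be the proper choice.
double[2] meanConfidenceInterval(double mean, double stdDev,
                                 size_t n, double level = 0.95)
{
    immutable z = cast(double) normalDistributionInverse((1.0 + level) / 2.0);
    immutable halfWidth = z * stdDev / sqrt(cast(double) n);
    return [mean - halfWidth, mean + halfWidth];
}
```

For the numbers in the announcement (avg 93.3742, std dev 4.66808, 10 repetitions), this gives an interval of roughly [90.5, 96.3] ms, with the caveat that n = 10 is really t-distribution territory.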

Great!!! Thank you soo much Don!!! --jm

Mar 23 2012

On Friday, 23 March 2012 at 05:26:54 UTC, Nick Sabalausky wrote:
Wow, that's just fantastic! Really, this should be a standard system tool. I think this guy would be proud: http://zedshaw.com/essays/programmer_stats.html

Thanks for the good vibes!!!!! Hahahhah, that article is so ffffing hilarious! I love the Maddox tone. --jm

Mar 23 2012

On Fri, 23 Mar 2012 09:02:01 +0100, "Juan Manuel Cabo" <juanmanuel.cabo gmail.com> wrote:
On Friday, 23 March 2012 at 05:16:20 UTC, Andrei Alexandrescu wrote: [.....]
(man, the Gaussian curve is everywhere, it never ceases to perplex me).

I'm actually surprised. I'm working on benchmarking lately and the distributions I get are very concentrated around the minimum. Andrei

Well, the shape of the curve depends a lot on how the random noise gets into the measurement. I like 'ls -lR' because the randomness comes from everywhere, and it's quite bell-shaped. I guess there is a lot of I/O mess (even if the I/O is all cached, there are lots of opportunities for kernel mutexes to mess everything up, I guess). When testing "/bin/sleep 0.5", it will be quite a boring histogram. And I guess that when testing something that's only CPU-bound and doesn't make too many syscalls, the shape is more concentrated on a few values.

On the other hand, I'm sometimes getting some weird bimodal (two-peaked) curves, like the one I put in the README.md. It's definitely because of my laptop's CPU throttling, because it went away when I disabled it. (For the curious, on Ubuntu 64-bit, here is a way to disable throttling (WARNING: the machine might get hot until you undo this or reboot):

echo 1600000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
echo 1600000 > /sys/devices/system/cpu/cpu1/cpufreq/scaling_min_freq

(yes, my CPU is 1.6GHz, but it rocks). --jm

On Gnome I use the cpufreq-applet to change the frequency governor from 'ondemand' to 'performance'. That's better than manually setting a minimum frequency. (Alternatively you can set it through the /sys interface.) Unless, of course, this governor is not compiled into the kernel. -- Marco

Mar 24 2012

On Tuesday, 27 March 2012 at 00:58:26 UTC, Ary Manzana wrote:
On 3/23/12 4:11 PM, Juan Manuel Cabo wrote:
On Friday, 23 March 2012 at 06:51:48 UTC, James Miller wrote:

Dude, this is awesome.

Thanks!! I appreciate your feedback!
I would suggest changing the name while you still can.

Suggestions welcome!! --jm

give_me_d_average

Hahahah, naahh, I prefer avgtime or timestats, because times<tab> would autocomplete to timestats. What have you been up to all this time? Thanks for telling me about D years ago. It stuck in my head, and last year when I started a new job I got the chance to dive into D. Cheers Ary, I hope you're doing well!! --jm

Mar 26 2012

On Tuesday, 27 March 2012 at 01:19:22 UTC, Juan Manuel Cabo wrote:
On Tuesday, 27 March 2012 at 00:58:26 UTC, Ary Manzana wrote:
On 3/23/12 4:11 PM, Juan Manuel Cabo wrote:

[.....]

I said the name as a joke :-P I was really surprised to see you on the list! I thought "Juanma?". How cool that you like D. I like it too, but it has some ugly parts that unfortunately I don't see changing anytime soon... (or ever). So you are using D for work?

Mar 26 2012

On Thursday, 22 March 2012 at 17:13:58 UTC, Manfred Nowak wrote:
Juan Manuel Cabo wrote:
like the unix 'time' command

`version linux' is missing. -manfred

Done! It works on Windows now too (release 0.5 on github). --jm
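For the curious, platform support like this is typically handled with D's version blocks, which select code at compile time. A hypothetical sketch (the backend names here are assumptions for illustration, not avgtime's actual implementation):

```d
/// Pick a timer backend name per platform at compile time.
/// Each version block only exists in the build for its platform.
string timerBackend()
{
    version (linux)
        return "clock_gettime";            // assumed POSIX backend
    else version (Windows)
        return "QueryPerformanceCounter";  // assumed Win32 backend
    else
        return "unknown";
}
```

The unselected branches are not even compiled, so Windows-only APIs never break the Linux build and vice versa.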

Mar 26 2012

On Tuesday, 27 March 2012 at 03:39:56 UTC, Ary Manzana wrote:
On Tuesday, 27 March 2012 at 01:19:22 UTC, Juan Manuel Cabo wrote:
On Tuesday, 27 March 2012 at 00:58:26 UTC, Ary Manzana wrote:

[.....]

I said the name as a joke :-P

ahhaha, I know you said it as a joke! --jm

Mar 26 2012

On Wed, Mar 21, 2012 at 6:32 PM, Juan Manuel Cabo <juanmanuel.cabo gmail.com> wrote:

This is a small util I wrote in D which is like the unix 'time' command but can repeat the command N times and show median, average, standard deviation, minimum and maximum. As you all know, it is not proper to conclude that a program is faster than another program by running them just once. It's BOOST and is in github: https://github.com/jmcabo/avgtime

Example:

avgtime -r 10 -q ls -lR /etc

------------------------
Total time (ms): 933.742
Repetitions    : 10
Median time    : 90.505
Avg time       : 93.3742
Std dev.       : 4.66808
Minimum        : 88.732
Maximum        : 101.225

The -q argument pipes stderr and stdout of the program under test to /dev/null. I put more info in the github page. HAVE FUN!! --jm

Nice tool. I used it here: http://www.reddit.com/r/programming/comments/rif9x/uniform_function_call_syntax_for_the_d/c46bjs7?context=2

It'd be neat if it could create comparison statistics between the execution of two different programs, so you could compare the performance of changes or alternative approaches to the same problem, as I was doing.

Regards, Brad Anderson


Mar 29 2012