digitalmars.D.announce - avgtime - Small D util for your everyday benchmarking needs
- Juan Manuel Cabo (23/23) Mar 21 2012 This is a small util I wrote in D which is like the unix
- Tove (6/29) Mar 21 2012 Awesome, I do have a tiny feature request for the next version...
- Juan Manuel Cabo (18/22) Mar 21 2012 Done, I just put it in github. (-d switch).
- Nick Sabalausky (3/25) Mar 21 2012 Oooh, that sounds fantastic!
- Manfred Nowak (3/4) Mar 22 2012 `version linux' is missing.
- Juan Manuel Cabo (4/8) Mar 23 2012 Linux only for now. Will make it work in windows this weekend.
- Juan Manuel Cabo (4/8) Mar 26 2012 Done!, it works in windows now too.
- Andrei Alexandrescu (6/15) Mar 22 2012 Sweet! You may want to also print the mode of the distribution, which is...
- Juan Manuel Cabo (72/79) Mar 22 2012 Thanks for your feedback!
- Andrei Alexandrescu (4/18) Mar 22 2012 I'm actually surprised. I'm working on benchmarking lately and the
- Juan Manuel Cabo (28/34) Mar 23 2012 On Friday, 23 March 2012 at 05:16:20 UTC, Andrei Alexandrescu
- Andrei Alexandrescu (22/33) Mar 23 2012 [snip]
- Juan Manuel Cabo (46/85) Mar 23 2012 In general, I agree with your reasoning. And I appreciate you
- Manfred Nowak (6/8) Mar 23 2012 True, if the theoretical variance of the distribution of T is close to
- Andrei Alexandrescu (5/12) Mar 24 2012 Wait, doesn't a benchmark always measure an algorithm with the same
- Manfred Nowak (16/18) Mar 25 2012 The fact that you formulate as a question indicates that you are unsure
- Marco Leise (5/53) Mar 24 2012 On Gnome I use the cpufreq-applet to change the the frequency governor f...
- Nick Sabalausky (5/83) Mar 22 2012 Wow, that's just fantastic! Really, this should be a standard system too...
- Juan Manuel Cabo (5/9) Mar 23 2012 Thanks for the good vibes!!!!!
- James Miller (7/14) Mar 22 2012 Dude, this is awesome. I tend to just use time, but if I was doing
- Juan Manuel Cabo (4/15) Mar 23 2012 Suggestions welcome!!
- Nick Sabalausky (3/15) Mar 23 2012 "timestats"?
- Ary Manzana (2/15) Mar 26 2012 give_me_d_average
- Juan Manuel Cabo (8/34) Mar 26 2012 Hahahah, naahh, I prefer avgtime or timestats, because times
- Ary Manzana (7/43) Mar 26 2012 I said the name as a joke :-P
- Juan Manuel Cabo (4/47) Mar 26 2012 [...]
- Manfred Nowak (11/13) Mar 22 2012 In case of this implementation and according to the given link: trivial
- Juan Manuel Cabo (36/48) Mar 23 2012 I'm not printing the population mode, I'm printing the 'sample
- James Miller (5/11) Mar 23 2012 If its possible to calculate it, then you can generate a table at
- Don Clugston (2/45) Mar 23 2012
- Don Clugston (41/92) Mar 23 2012 Aargh, I didn't get around to copying it in. But this should do it.
- Andrei Alexandrescu (5/7) Mar 23 2012 [snip]
- Juan Manuel Cabo (3/10) Mar 23 2012 Great!!! Thank you soo much Don!!!
- Andrei Alexandrescu (6/18) Mar 23 2012 Again, benchmarks I've seen are always asymmetric. Not sure why those
- Don Clugston (9/32) Mar 27 2012 Agreed, I think situations where you would get a normal distribution are...
- Brad Anderson (9/33) Mar 29 2012 Nice tool. I used it here:
This is a small util I wrote in D which is like the unix 'time' command, but it can repeat the command N times and show the median, average, standard deviation, minimum and maximum. As you all know, it is not proper to conclude that a program is faster than another program by running them just once.

It's Boost licensed and is on GitHub: https://github.com/jmcabo/avgtime

Example:

avgtime -r 10 -q ls -lR /etc

------------------------
Total time (ms): 933.742
Repetitions    : 10
Median time    : 90.505
Avg time       : 93.3742
Std dev.       : 4.66808
Minimum        : 88.732
Maximum        : 101.225

The -q argument redirects stderr and stdout of the program under test to /dev/null.

I put more info in the GitHub page.

HAVE FUN!!
--jm
Mar 21 2012
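[For reference, a minimal sketch of the measuring loop such a tool performs. This is not the actual avgtime source (that is on GitHub); it assumes today's std.datetime.stopwatch and std.process, and the statistics are reduced to min/avg/max.]

import std.algorithm : sort;
import std.datetime.stopwatch : AutoStart, StopWatch;
import std.process : spawnProcess, wait;
import std.stdio : writefln;

void main(string[] args)
{
    // Hypothetical: everything after the program name is the command to run.
    auto cmd  = args[1 .. $];
    enum reps = 10;
    double[] times;

    foreach (i; 0 .. reps)
    {
        auto sw  = StopWatch(AutoStart.yes);
        auto pid = spawnProcess(cmd);
        wait(pid);                                  // block until the child exits
        times ~= sw.peek.total!"usecs" / 1000.0;    // elapsed wall time in ms
    }

    sort(times);
    double sum = 0;
    foreach (t; times) sum += t;
    writefln("Minimum : %s", times[0]);
    writefln("Avg time: %s", sum / times.length);
    writefln("Maximum : %s", times[$ - 1]);
}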
On Thursday, 22 March 2012 at 00:32:31 UTC, Juan Manuel Cabo wrote:
> This is a small util I wrote in D which is like the unix 'time'
> command, but it can repeat the command N times and show the median,
> average, standard deviation, minimum and maximum.
> [...]

Awesome, I do have a tiny feature request for the next version... a commandline switch to enable automatically discarding the first run as an outlier.

/Tove
Mar 21 2012
On Thursday, 22 March 2012 at 01:37:19 UTC, Tove wrote:
> Awesome, I do have a tiny feature request for the next version...
> a commandline switch to enable automatically discarding the first
> run as an outlier.

Done, I just put it in github (-d switch). But maybe you should be looking at the median to ignore outliers.

I also added a -p switch to print all the times:

./avgtime -d -q -p -r10 ls -lR /usr/share/doc

------------------------
Total time (ms): 3986.69
Repetitions    : 10
Median time    : 397.62
Avg time       : 398.669
Std dev.       : 2.95832
Minimum        : 395.633
Maximum        : 406.274
Sorted times   : [395.633, 396.261, 396.273, 397.413, 397.425, 397.815, 399.321, 399.719, 400.551, 406.274]

--jm
Mar 21 2012
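[For reference, a minimal sketch of why the median shrugs off the kind of outlier the -d switch is meant to discard. Illustrative only, not avgtime's code; the numbers are made up.]

import std.algorithm : sort, sum;

// Median of a set of timings: sort a copy and take the middle element
// (or the average of the two middle elements for an even count).
double median(const double[] xs)
{
    auto s = xs.dup;
    sort(s);
    return s.length % 2 ? s[$ / 2] : (s[$ / 2 - 1] + s[$ / 2]) / 2.0;
}

unittest
{
    // One slow outlier (a cold first run, say) barely moves the median
    // but drags the mean up by hundreds of milliseconds.
    double[] times = [395.6, 396.3, 397.4, 397.8, 399.3, 1200.0];
    assert(median(times) == (397.4 + 397.8) / 2.0);
    assert(times.sum / times.length > 500);
}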
"Juan Manuel Cabo" <juanmanuel.cabo gmail.com> wrote in message news:zgjczrnyknqsiylhntui forum.dlang.org...This is a small util I wrote in D which is like the unix 'time' command but can repeat the command N times and show median, average, standard deviation, minimum and maximum. As you all know, it is not proper to conclude that a program is faster than another program by running them just once. It's BOOST and is in github: https://github.com/jmcabo/avgtime Example: avgtime -r 10 -q ls -lR /etc ------------------------ Total time (ms): 933.742 Repetitions : 10 Median time : 90.505 Avg time : 93.3742 Std dev. : 4.66808 Minimum : 88.732 Maximum : 101.225 The -q argument pipes stderr and stdout of the program under test to /dev/null I put more info in the github page. HAVE FUN!!Oooh, that sounds fantastic!
Mar 21 2012
Juan Manuel Cabo wrote:
> like the unix 'time' command

`version linux' is missing.

-manfred
Mar 22 2012
On Thursday, 22 March 2012 at 17:13:58 UTC, Manfred Nowak wrote:
> `version linux' is missing.

Linux only for now. Will make it work on Windows this weekend. I hope that's what you meant.

--jm
Mar 23 2012
On Thursday, 22 March 2012 at 17:13:58 UTC, Manfred Nowak wrote:
> `version linux' is missing.

Done! It works on Windows now too (release 0.5 on GitHub).

--jm
Mar 26 2012
On 3/21/12 7:32 PM, Juan Manuel Cabo wrote:
> avgtime -r 10 -q ls -lR /etc
>
> ------------------------
> Total time (ms): 933.742
> Repetitions    : 10
> Median time    : 90.505
> Avg time       : 93.3742
> Std dev.       : 4.66808
> Minimum        : 88.732
> Maximum        : 101.225

Sweet! You may want to also print the mode of the distribution, which is the time of the maximum sample density. http://en.wikipedia.org/wiki/Mode_(statistics)

(Warning: nontrivial but informative.)

Andrei
Mar 22 2012
On Thursday, 22 March 2012 at 22:22:31 UTC, Andrei Alexandrescu wrote:
> Sweet! You may want to also print the mode of the distribution,
> which is the time of the maximum sample density.
> http://en.wikipedia.org/wiki/Mode_(statistics)
> (Warning: nontrivial but informative.)

Thanks for your feedback!

Done! Just pushed it to github. I made a histogram too!! (man, the Gaussian curve is everywhere, it never ceases to perplex me).

The histogram bins are the most significant digits (three "automatic" levels of precision, with rounding and casting tricks).

But I think the most important change is that I'm now showing the 95% and 99% confidence intervals. (For the confidence intervals to mean anything, please everyone, remember to control your variables (don't defrag and benchmark :-) !!) so that apples are still apples and don't become oranges, and make sure N>30.) More info on the histogram and confidence intervals in the usage help.

avgtime -q -h -r400 ls /etc

------------------------
Total time (ms): 2751.96
Repetitions    : 400
Sample mode    : 6.9 (79 occurrences)
Median time    : 6.945
Avg time       : 6.8799
Std dev.       : 0.93927
Minimum        : 3.7
Maximum        : 16.36
95% conf.int.  : [6.78786, 6.97195] e = 0.0920468
99% conf.int.  : [6.75893, 7.00087] e = 0.12097

Histogram :
msecs: count normalized bar
 3.9: 1
 4.0: 1
 4.3: 1
 4.4: 1
 5.2: 1
 6.1: 1
 6.2: 1
 8.1: 1
 8.2: 1
 8.7: 1
 8.8: 1
 9.1: 1
11.5: 1
16.3: 1

--jm
Mar 22 2012
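[For reference, the normal-approximation interval mentioned above (valid for N > 30) boils down to mean ± z * sigma / sqrt(N). A minimal sketch, not the exact code in avgtime; the struct and function names are made up.]

import std.algorithm : map, sum;
import std.math : sqrt;

struct Interval { double lo, hi; }

// 95% confidence interval for the mean, using the normal (z) approximation.
Interval confidence95(const double[] times)
{
    immutable n    = cast(double) times.length;
    immutable mean = times.sum / n;
    // Standard deviation of the sample (dividing by n; see the later
    // discussion about n - 1 and Student's t for small samples).
    immutable sdev = sqrt(times.map!(t => (t - mean) * (t - mean)).sum / n);
    immutable e    = 1.959964 * sdev / sqrt(n);   // z for a two-sided 95% level
    return Interval(mean - e, mean + e);
}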
On 3/22/12 11:53 PM, Juan Manuel Cabo wrote:
> Done! Just pushed it to github. I made a histogram too!! (man, the
> Gaussian curve is everywhere, it never ceases to perplex me).

I'm actually surprised. I'm working on benchmarking lately and the distributions I get are very concentrated around the minimum.

Andrei
Mar 22 2012
On Friday, 23 March 2012 at 05:16:20 UTC, Andrei Alexandrescu wrote:
> I'm actually surprised. I'm working on benchmarking lately and the
> distributions I get are very concentrated around the minimum.

Well, the shape of the curve depends a lot on how the random noise gets inside the measurement. I like 'ls -lR' because the randomness comes from everywhere, and it's quite bell shaped. I guess there is a lot of I/O mess (even if I/O is all cached, there are lots of opportunities for kernel mutexes to mess everything up, I guess).

When testing "/bin/sleep 0.5", it will be quite a pretty boring histogram. And I guess that when testing something that's only CPU bound and doesn't make too many syscalls, the shape is more concentrated in a few values.

On the other hand, I'm getting some weird bimodal (two peaks) curves sometimes, like the one I put in the README.md. It's definitely because of my laptop's CPU throttling, because it went away when I disabled it. (For the curious ones, on Ubuntu 64-bit, here is a way to disable throttling (WARNING: might get hot until you undo or reboot):

echo 1600000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
echo 1600000 > /sys/devices/system/cpu/cpu1/cpufreq/scaling_min_freq

(yes, my CPU is 1.6GHz, but it rocks).

--jm
Mar 23 2012
On 3/23/12 3:02 AM, Juan Manuel Cabo wrote:
> Well, the shape of the curve depends a lot on how the random noise
> gets inside the measurement.
> [snip]

Hmm, well the way I see it, the observed measurements have the following composition:

X = T + Q + N

where T > 0 (a constant) is the "real" time taken by the processing, Q > 0 is the quantization noise caused by the limited resolution of the clock (can be considered 0 if the resolution is much smaller than the actual time), and N is noise caused by a variety of factors (other processes, throttling, interrupts, networking, memory hierarchy effects, and many more). The challenge is estimating T given a bunch of X samples.

N can probably be approximated by a Gaussian, although for short timings I noticed it's more like bursts that just cause outliers. But note that N is always positive (therefore not 100% Gaussian), i.e. there's no way to insert some noise that makes the code seem artificially faster. It's all additive.

Taking the mode of the distribution will estimate T + mode(N), which is informative because after all there's no way to eliminate noise. However, if the focus is improving T, we want an estimate as close to T as possible. In the limit, taking the minimum over infinitely many measurements of X would yield T.

Andrei
Mar 23 2012
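[Under that model, the estimator described above is simply the minimum over repeated runs. A minimal sketch for an in-process callable, assuming the current std.datetime.stopwatch; the function name is made up.]

import std.datetime.stopwatch : benchmark;
import core.time : Duration;

// Estimate T for a callable by taking the fastest of n single-call
// measurements; noise N >= 0 can only inflate a sample, never shrink it.
Duration estimateT(alias fun)(uint n)
{
    auto best = Duration.max;
    foreach (i; 0 .. n)
    {
        auto r = benchmark!fun(1);   // one call per sample
        if (r[0] < best)
            best = r[0];
    }
    return best;
}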
On Friday, 23 March 2012 at 15:33:18 UTC, Andrei Alexandrescu wrote:
> Hmm, well the way I see it, the observed measurements have the
> following composition: X = T + Q + N
> [...]
> However, if the focus is improving T, we want an estimate as close
> to T as possible. In the limit, taking the minimum over infinitely
> many measurements of X would yield T.

In general, I agree with your reasoning. And I appreciate you taking the time to put it so eloquently!! But I think that considering T as a constant, and preferring the minimum, misses something.

This might work very well for benchmarking mostly CPU-bound processes, but all those other things that you consider noise (disk I/O, network, memory hierarchy, etc.) are part of the elements that make an algorithm or program faster than another, and I would consider them inside T for some applications.

Consider the case depicted in this wonderful (ranty) article that was posted elsewhere in this thread: http://zedshaw.com/essays/programmer_stats.html

In a part of the article, the guy talks about a system that worked fast most of the time, but would halt for a good 1 or 2 minutes sometimes. The minimum time for such a system might be a few ms, but the standard deviation would be big. This properly shifts the average time away from the minimum.

If programA does the same task as programB with less I/O, or with better memory layout, etc., its average will be better, and maybe its timings won't be so spread out. But the minimum will be the same.

So, in the end, I'm just happy that I could share this little avgtime with you all, and as usual there is no one-answer-fits-all. For some applications, the minimum will be enough. For others, it's essential to look at how spread out the sample is.

On the symmetry/asymmetry of the distribution topic: I realize, as you said, that T never gets faster than a certain point. But, depending on the nature of the program under test, the good utilization of disk I/O, network, memory, motherboard buses, etc. is what you want inside the test too, and those come with Gaussian-like noises which might dominate over T or not. A program that avoids that other big noise is a better program (all else the same), so I would tend to consider the whole.

Thanks for the eloquence/insightfulness in your post! I'll consider adding chi-squared confidence intervals in the future (and I'm open to more info, or to whether another distribution might be better).

--jm
Mar 23 2012
Andrei Alexandrescu wrote:
> In the limit, taking the minimum over infinitely many measurements
> of X would yield T.

True, if the theoretical variance of the distribution of T is close to zero. But horribly wrong if T depends on an algorithm that is fast only under amortized analysis, because the worst-case scenario will be hidden.

-manfred
Mar 23 2012
On 3/23/12 5:42 PM, Manfred Nowak wrote:
> True, if the theoretical variance of the distribution of T is close
> to zero. But horribly wrong if T depends on an algorithm that is
> fast only under amortized analysis, because the worst-case scenario
> will be hidden.

Wait, doesn't a benchmark always measure an algorithm with the same input? For collecting a chart of various inputs, there should be various benchmarks.

Andrei
Mar 24 2012
Andrei Alexandrescu wrote:
> Wait, doesn't a benchmark always measure an algorithm with the same
> input?

The fact that you formulate it as a question indicates that you are unsure about the right answer---me too, but:

1) surely one can define a benchmark to have this property. But if one uses this definition, the used input would belong to the benchmark as part of its description. I have never seen a description of a benchmark including the input, but because I am more interested in theory I may have simply missed such descriptions.

2) if a probabilistic algorithm is used, the meaning of input becomes unclear, because the state of the machine influences T.

3) if a heuristic is used by the benchmarked algorithm, then a made-up family of benchmarks can "prove" T = O(n*n) for quicksort.

-manfred
Mar 25 2012
On Fri, 23 Mar 2012 09:02:01 +0100, "Juan Manuel Cabo" <juanmanuel.cabo gmail.com> wrote:
> On the other hand, I'm getting some weird bimodal (two peaks) curves
> sometimes, like the one I put in the README.md. It's definitely
> because of my laptop's CPU throttling, because it went away when I
> disabled it.
> [...]

On Gnome I use the cpufreq-applet to change the frequency governor from 'ondemand' to 'performance'. That's better than manually setting a minimum frequency. (Alternatively you can set it through the /sys interface.) - Unless this governor is not compiled into the kernel.

-- Marco
Mar 24 2012
"Juan Manuel Cabo" <juanmanuel.cabo gmail.com> wrote in message news:mytcmgglyntqsoybjcfz forum.dlang.org...On Thursday, 22 March 2012 at 22:22:31 UTC, Andrei Alexandrescu wrote:Wow, that's just fantastic! Really, this should be a standard system tool. I think this guy would be proud: http://zedshaw.com/essays/programmer_stats.htmlSweet! You may want to also print the mode of the distribution, which is the time of the maximum sample density. http://en.wikipedia.org/wiki/Mode_(statistics) (Warning: nontrivial but informative.) AndreiThanks for your feedback!Sweet! You may want to also print the mode of the distribution, [....]Done!. Just pushed it to github. I made a histogram too!! (man, the gaussian curve is everywhere, it never ceases to perplex me). The histogram bins are the most significant digits (three "automatic" levels of precision, with rounding and casting tricks). But I think the most important change is that I'm now showing the 95% and 99% confidence intervals. (For the confidence intervals to mean anything, please everyone, remember to control your variables (don't defrag and benchmark :-) !!) so that apples are still apples and don't become oranges, and make sure N>30). More info on histogram and confidence intervals in the usage help. avgtime -q -h -r400 ls /etc ------------------------ Total time (ms): 2751.96 Repetitions : 400 Sample mode : 6.9 (79 ocurrences) Median time : 6.945 Avg time : 6.8799 Std dev. : 0.93927 Minimum : 3.7 Maximum : 16.36 95% conf.int. : [6.78786, 6.97195] e = 0.0920468 99% conf.int. : [6.75893, 7.00087] e = 0.12097 Histogram : msecs: count normalized bar 3.9: 1 4.0: 1 4.3: 1 4.4: 1 5.2: 1 6.1: 1 6.2: 1 8.1: 1 8.2: 1 8.7: 1 8.8: 1 9.1: 1 11.5: 1 16.3: 1
Mar 22 2012
On Friday, 23 March 2012 at 05:26:54 UTC, Nick Sabalausky wrote:
> Wow, that's just fantastic! Really, this should be a standard system
> tool. I think this guy would be proud:
> http://zedshaw.com/essays/programmer_stats.html

Thanks for the good vibes!!!!!

Hahahhah, that article is so ffffing hilarious! I love the Maddox tone.

--jm
Mar 23 2012
On 23 March 2012 17:53, Juan Manuel Cabo <juanmanuel.cabo gmail.com> wrote:
> But I think the most important change is that I'm now showing the
> 95% and 99% confidence intervals. (For the confidence intervals to
> mean anything, please everyone, remember to control your variables
> (don't defrag and benchmark :-) !!) so that apples are still apples
> and don't become oranges, and make sure N>30.) More info on the
> histogram and confidence intervals in the usage help.

Dude, this is awesome. I tend to just use time, but if I was doing anything more complicated, I'd use this.

I would suggest changing the name while you still can. avgtime is not that informative a name given that it now does more than just "Average" times.

-- James Miller
Mar 22 2012
On Friday, 23 March 2012 at 06:51:48 UTC, James Miller wrote:
> Dude, this is awesome.

Thanks!! I appreciate your feedback!

> I would suggest changing the name while you still can.

Suggestions welcome!!

--jm
Mar 23 2012
"Juan Manuel Cabo" <juanmanuel.cabo gmail.com> wrote in message news:bqrlhcggehbrzyuhzjuy forum.dlang.org...On Friday, 23 March 2012 at 06:51:48 UTC, James Miller wrote:"timestats"?Dude, this is awesome. I tend to just use time, but if I was doing anything more complicated, I'd use this. I would suggest changing the name while you still can. avgtime is not that informative a name given that it now does more than just "Average" times. -- James MillerDude, this is awesome.Thanks!! I appreciate your feedback!I would suggest changing the name while you still can.Suggestions welcome!!
Mar 23 2012
On 3/23/12 4:11 PM, Juan Manuel Cabo wrote:
> Suggestions welcome!!

give_me_d_average
Mar 26 2012
On Tuesday, 27 March 2012 at 00:58:26 UTC, Ary Manzana wrote:
> give_me_d_average

Hahahah, naahh, I prefer avgtime or timestats, because times<tab> would autocomplete to timestats.

What have you been up to after all this time? Thanks for mentioning D to me years ago. It stuck in my head, and last year when I started a new job I had the chance to get into D.

Cheers Ary, I hope you're doing well!!

--jm
Mar 26 2012
On Tuesday, 27 March 2012 at 01:19:22 UTC, Juan Manuel Cabo wrote:
> Hahahah, naahh, I prefer avgtime or timestats, because times<tab>
> would autocomplete to timestats.
> [...]

I said the name as a joke :-P

I was really surprised to see you on the list! I thought, "Juanma?". How cool that you like D. I like it too, but it has some ugly parts that unfortunately I don't see changing any time soon... (or ever).

So you are using D for work?
Mar 26 2012
On Tuesday, 27 March 2012 at 03:39:56 UTC, Ary Manzana wrote:
> I said the name as a joke :-P
> [...]

Hahaha, I know you said it as a joke!

--jm
Mar 26 2012
Andrei Alexandrescu wrote:
> You may want to also print the mode of the distribution,
> nontrivial but informative

In case of this implementation, and according to the given link: trivial and noninformative, because

| For samples, if it is known that they are drawn from a symmetric
| distribution, the sample mean can be used as an estimate of the
| population mode.

and the program computes the variance as if the values of the sample follow a normal distribution, which is symmetric. Therefore the mode of the sample is of interest only when the variance is calculated wrongly.

-manfred
Mar 22 2012
On Friday, 23 March 2012 at 05:51:40 UTC, Manfred Nowak wrote:
> | For samples, if it is known that they are drawn from a symmetric
> | distribution, the sample mean can be used as an estimate of the
> | population mode.

I'm not printing the population mode, I'm printing the 'sample mode'. It has a very clear meaning: most frequent value. To have frequency, I group into 'bins' by precision: 12.345 and 12.3111 will both go to the 12.3 bin.

> and the program computes the variance as if the values of the
> sample follow a normal distribution, which is symmetric.

This program doesn't compute the variance. Maybe you are talking about another program. This program computes the standard deviation of the sample. The sample doesn't need to be of any distribution to have a standard deviation. It is not a distribution parameter, it is a statistic.

> Therefore the mode of the sample is of interest only when the
> variance is calculated wrongly.

??? The 'sample mode', 'median' and 'average' can quickly tell you something about the shape of the histogram, without looking at it. If the three coincide, then maybe you are in normal distribution land.

The only place where I assume a normal distribution is for the confidence intervals. And it's in the usage help.

If you want to support estimating weird probability distribution parameters, forking and pull requests are welcome. Rewrites too. Good luck detecting distribution shapes!!!! ;-)

PS: I should use the Student's t to make the confidence intervals, and for computing that I should use the sample standard deviation (dividing by n-1), but that is a completely different story. The z normal approximation with n>30 is quite good. (I would have to embed a table for the Student's t tail factors, pull reqs welcome.)

PS2: I now fixed the confusion between the confidence interval of the variable and the confidence interval of the mu average; I simply now show both (release 0.4).

PS3: Statistics estimate distribution parameters.

--jm
Mar 23 2012
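[A minimal sketch of the binning that produces the sample mode described above, with a fixed bin width standing in for avgtime's automatic choice of precision: with a width of 0.1, both 12.345 and 12.3111 land in the 12.3 bin. Not avgtime's actual code; the names are made up.]

import std.math : round;

// Group measurements into bins of a fixed width and return the centre
// of the most populated bin plus how many samples fell into it.
auto sampleMode(const double[] times, double binWidth = 0.1)
{
    int[long] counts;                            // bin index -> occurrences
    foreach (t; times)
        counts[cast(long) round(t / binWidth)]++;

    long bestBin;
    int  bestCount;
    foreach (bin, count; counts)
        if (count > bestCount)
        {
            bestBin   = bin;
            bestCount = count;
        }

    static struct Mode { double value; int count; }
    return Mode(bestBin * binWidth, bestCount);
}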
On 23 March 2012 21:37, Juan Manuel Cabo <juanmanuel.cabo gmail.com> wrote:
> PS: I should use the Student's t to make the confidence intervals,
> and for computing that I should use the sample standard deviation
> (dividing by n-1), but that is a completely different story. The z
> normal approximation with n>30 is quite good. (I would have to embed
> a table for the Student's t tail factors, pull reqs welcome.)

If it's possible to calculate it, then you can generate a table at compile time using CTFE. Less error-prone, and controllable accuracy.

-- James Miller
Mar 23 2012
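[A sketch of what James is suggesting: the table is filled in entirely at compile time by running an ordinary function through CTFE. The formula here is only a rough asymptotic approximation of the two-sided 95% t critical value (an expansion around the normal quantile z ≈ 1.96), standing in for the real inverse t computation; all names are made up.]

import std.stdio : writefln;

// Rough two-sided 95% Student's t critical value for nu degrees of freedom
// (first terms of the expansion around the normal quantile; illustrative only).
double tCrit95(int nu) pure nothrow @safe @nogc
{
    enum z  = 1.959964;
    enum z3 = z * z * z;
    enum z5 = z3 * z * z;
    return z + (z3 + z) / (4.0 * nu)
             + (5 * z5 + 16 * z3 + 3 * z) / (96.0 * nu * nu);
}

// The initializer runs at compile time (CTFE); the binary just embeds the table.
static immutable double[31] tTable95 = () {
    double[31] t;
    foreach (nu; 1 .. 31)
        t[nu] = tCrit95(nu);
    return t;
}();

void main()
{
    writefln("approx t(95%%, 10 dof) = %.3f (exact is about 2.228)", tTable95[10]);
}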
On 23/03/12 09:37, Juan Manuel Cabo wrote:
> PS: I should use the Student's t to make the confidence intervals,
> and for computing that I should use the sample standard deviation
> (dividing by n-1), but that is a completely different story.
> [...]

No, it's easy. Student t is in std.mathspecial.
Mar 23 2012
On 23/03/12 11:20, Don Clugston wrote:
> No, it's easy. Student t is in std.mathspecial.

Aargh, I didn't get around to copying it in. But this should do it.

// Needs: import std.math : fabs, sqrt;
// betaIncompleteInv is the inverse incomplete beta function (in current
// Phobos it is available as std.mathspecial.betaIncompleteInverse).

/** Inverse of Student's t distribution
 *
 * Given probability p and degrees of freedom nu,
 * finds the argument t such that the one-sided
 * studentsDistribution(nu,t) is equal to p.
 *
 * Params:
 *  nu = degrees of freedom. Must be >1
 *  p  = probability. 0 < p < 1
 */
real studentsTDistributionInv(int nu, real p)
in
{
    assert(nu > 0);
    assert(p >= 0.0L && p <= 1.0L);
}
body
{
    if (p == 0) return -real.infinity;
    if (p == 1) return  real.infinity;

    real rk, z;
    rk = nu;

    if (p > 0.25L && p < 0.75L)
    {
        if (p == 0.5L) return 0;
        z = 1.0L - 2.0L * p;
        z = betaIncompleteInv(0.5L, 0.5L * rk, fabs(z));
        real t = sqrt(rk * z / (1.0L - z));
        if (p < 0.5L)
            t = -t;
        return t;
    }
    int rflg = -1;              // sign of the result
    if (p >= 0.5L)
    {
        p = 1.0L - p;
        rflg = 1;
    }
    z = betaIncompleteInv(0.5L * rk, 0.5L, 2.0L * p);
    if (z < 0) return rflg * real.infinity;
    return rflg * sqrt(rk / z - rk);
}
Mar 23 2012
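[For context, this is roughly how the function above would slot into the confidence-interval computation discussed earlier. A sketch only; the surrounding names are made up.]

import std.math : sqrt;

// 95% two-sided confidence interval for the mean, using Student's t.
// `mean` and `sdev` are the sample mean and sample standard deviation
// (the n - 1 flavour), `n` is the number of repetitions.
double[2] tInterval95(double mean, double sdev, int n)
{
    // two-sided 95% => one-sided probability 0.975, with n - 1 degrees of freedom
    immutable double t = cast(double) studentsTDistributionInv(n - 1, 0.975L);
    immutable double e = t * sdev / sqrt(cast(double) n);
    return [mean - e, mean + e];
}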
On 3/23/12 5:51 AM, Don Clugston wrote:
> Aargh, I didn't get around to copying it in. But this should do it.
> [snip]

Shouldn't we put this stuff in std.numeric, or create a std.stat module? I think some functions for t-tests would also be useful.

Andrei
Mar 23 2012
On Friday, 23 March 2012 at 10:51:37 UTC, Don Clugston wrote:
> Aargh, I didn't get around to copying it in. But this should do it.
>
> /** Inverse of Student's t distribution
> [.....]

Great!!! Thank you soo much Don!!!

--jm
Mar 23 2012
On 3/23/12 12:51 AM, Manfred Nowak wrote:
> Therefore the mode of the sample is of interest only when the
> variance is calculated wrongly.

Again, benchmarks I've seen are always asymmetric. Not sure why those shown here are symmetric. The mode should be very close to the minimum (and in fact I think taking the minimum is a pretty good approximation of the sought-after time).

Andrei
Mar 23 2012
On 23/03/12 16:25, Andrei Alexandrescu wrote:
> Again, benchmarks I've seen are always asymmetric. Not sure why those
> shown here are symmetric. The mode should be very close to the minimum
> (and in fact I think taking the minimum is a pretty good approximation
> of the sought-after time).

Agreed. I think situations where you would get a normal distribution are rare in benchmarking code. Small sections of code always have a best-case scenario, where there are no cache misses. If there are task switches, the best case is zero task switches.

If you use the CPU performance counters, you can identify the *cause* of performance variations. When I've done this, I've always been able to get very stable numbers.
Mar 27 2012
On Wed, Mar 21, 2012 at 6:32 PM, Juan Manuel Cabo <juanmanuel.cabo gmail.com> wrote:
> This is a small util I wrote in D which is like the unix 'time'
> command, but it can repeat the command N times and show the median,
> average, standard deviation, minimum and maximum.
> [...]

Nice tool. I used it here:
http://www.reddit.com/r/programming/comments/rif9x/uniform_function_call_syntax_for_the_d/c46bjs7?context=2

It'd be neat if it could create comparison statistics between the execution of two different programs, so you could compare the performance of changes or alternative approaches to the same problem as I was doing.

Regards,
Brad Anderson
Mar 29 2012