
digitalmars.D.announce - Blog Post - profiling with perf and friends

Martin Nowak <code dawg.eu> writes:
A few notes on using perf and related tools for profiling on 
Linux.

https://code.dawg.eu/profiling-with-perf-and-friends.html
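As a quick taste of the kind of workflow the post covers, here is a minimal sketch that builds a typical `perf record` invocation with call-graph sampling. The flag choices (`-F 99`, `--call-graph dwarf`) and the `./dmd` example target are illustrative assumptions, not taken from the post.

```python
import shlex

def perf_record_cmd(binary, args=(), freq=99):
    # Build a `perf record` command line: sample at `freq` Hz and capture
    # DWARF-based call graphs; everything after `--` is the profiled command.
    return ["perf", "record", "-F", str(freq),
            "--call-graph", "dwarf", "--", binary, *args]

print(shlex.join(perf_record_cmd("./dmd", ["-c", "app.d"])))
# perf record -F 99 --call-graph dwarf -- ./dmd -c app.d
```

After such a run, `perf report` browses the recorded samples interactively.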
Dec 25 2016
Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/25/2016 02:16 PM, Martin Nowak wrote:
 A few notes on using perf and related tools for profiling on Linux.

 https://code.dawg.eu/profiling-with-perf-and-friends.html
That's awesome and very easy to follow, reproduce, and tweak.

One thing that would be terrific (also in keeping with Vladimir's work, https://blog.thecybershadow.net/2015/05/05/is-d-slim-yet/, which by the way no longer seems to work; clicking on the graph leads to the error page http://digger.k3.1azy.net/trend/) is an automated tracker of build performance. The tracker would be easily accessible and would flag PRs that decrease compilation speed by more than a tolerance. One OS should be enough for now. We need a clean machine (not shared with many other tasks). The Foundation can provide a Linux machine in my basement. Looking at our A-Team: Seb, Vladimir, Martin, please let me know if this is something you'd pursue.

Two other things. One, can one aggregate outputs from several perf runs? The unittest builds are likely to exercise other corners of the compiler; they're one per module, so they'd entail multiple perf runs. Second, in my dreams we'd have benchmark unittests in Phobos that run and produce nice tabular output, for tabulation and plotting. Such output would make it easy to track how the performance of the standard library evolves across releases.

Thanks, Andrei
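On aggregating several perf runs: one workable approach is to save each run's counters in machine-readable form (e.g. `perf stat -x, -o runN.csv`) and sum them afterwards. A minimal sketch of that post-processing, assuming the common "value,unit,event,..." CSV layout (which can vary by perf version; the sample data below is made up):

```python
from collections import defaultdict

def aggregate(csv_texts):
    # Sum event counters across several `perf stat -x,` outputs,
    # e.g. one output per unittest-module run.
    totals = defaultdict(int)
    for text in csv_texts:
        for line in text.splitlines():
            if not line or line.startswith("#"):
                continue
            fields = line.split(",")
            value, event = fields[0], fields[2]
            if value.isdigit():  # skip "<not supported>" and friends
                totals[event] += int(value)
    return dict(totals)

run1 = "1234567,,instructions\n890,,cache-misses"
run2 = "7654321,,instructions\n110,,cache-misses"
print(aggregate([run1, run2]))
# {'instructions': 8888888, 'cache-misses': 1000}
```

This only merges counts; merging recorded sample profiles (`perf record` data) is a different problem.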
Dec 25 2016
Martin Nowak <code dawg.eu> writes:
On Sunday, 25 December 2016 at 20:45:13 UTC, Andrei Alexandrescu 
wrote:
 The tracker would be easily accessible and would flag PRs that 
 decrease compilation speed by more than a tolerance. One OS 
 should be enough for now. We need a clean machine (not shared 
 for many other tasks). The Foundation can provide a Linux 
 machine in my basement. Looking at our A-Team Seb, Vladimir, 
 Martin please let me know if this is something you'd pursue.
Added as a feature to our CI ideas: https://trello.com/c/zSnpnhrz/68-benchmark-ci

This would be really easy to integrate as a separate executor in a reworked Jenkins setup, but would be quite a lot of effort as a separate project. As said in the 2017 planning, that's up for January, so we might revisit this in February or so.

Since benchmarks aren't that critical, the slightly reduced reliability of a home server might suffice; performant servers are really cheap, though.
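The flagging logic such a benchmark CI would need is small. A hedged sketch (benchmark names, timings, and the 5% tolerance are all illustrative, not from the thread):

```python
def regressions(baseline, pr, tolerance=0.05):
    # Flag benchmarks whose PR timing exceeds the baseline by more than
    # `tolerance` (relative). Inputs map benchmark name -> seconds.
    flagged = {}
    for name, base in baseline.items():
        new = pr.get(name)
        if new is not None and new > base * (1 + tolerance):
            flagged[name] = (base, new)
    return flagged

baseline = {"build-druntime": 10.0, "build-phobos": 42.0}
pr       = {"build-druntime": 10.2, "build-phobos": 47.0}
print(regressions(baseline, pr))
# {'build-phobos': (42.0, 47.0)}
```

In practice the tolerance has to absorb run-to-run noise, which is exactly why a dedicated, otherwise idle machine matters.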
Dec 25 2016
Stefan Koch <uplink.coder googlemail.com> writes:
On Sunday, 25 December 2016 at 19:16:03 UTC, Martin Nowak wrote:
 A few notes on using perf and related tools for profiling 
 on Linux.

 https://code.dawg.eu/profiling-with-perf-and-friends.html
Nice article. There is a nice GUI tool for the same purpose: http://developer.amd.com/tools-and-sdks/opencl-zone/codexl/ I find it easier to use than perf, since it lists the useful statistics as a table.

Also very useful is valgrind --tool=callgrind and its GUI, kcachegrind: http://kcachegrind.sourceforge.net/ I used it to optimize the newCTFE code with great success.

I would also recommend compiling the executable you are profiling with -g -gc, so you can see which lines of code the asm instructions correspond to.
Dec 25 2016