
digitalmars.D - D benchmarks

reply "David Nadlinger" <see klickverbot.at> writes:
Hi all,

I am currently finalizing my material for the LDC DConf talk, and 
I thought it would be nice to include a quick runtime performance 
comparison between the different compilers, just to give the 
audience a general sense of what to expect.

Thus, I am looking for benchmarks to use in the talk. 
Specifically, they should:

  - be open source, or at least source-available, so other people 
can reproduce the results
  - be *reasonably* self-contained, so that I don't have to spend 
three hours setting up build dependencies
  - be written mostly in D, I don't want to benchmark GCC
  - work with DMD 2.061 or DMD 2.062
  - run on Linux or OS X

I already have a few results (Dmitry's std.regex and std.uni 
benchmarks, WebDrake's Dregs, some of my own projects, …), but it 
would be great if some of you could point me to your own set of 
tests so I can hopefully paint a more complete picture.

There is a host of results if you search for »benchmark« here on 
the forums, but many of the discussed test cases are trivial 
micro-benchmarks, and I was hoping to add a few more elaborate 
performance tests to my collection.

In the future – i.e. as soon as possible, but somebody has to 
actually spend some time on setting things up –, we might also 
want to set up a nightly tester with such benchmarks to track 
performance of the different compilers over time. It's not as 
crucial for GDC and LDC as it is for the upstream backend 
projects, but there are still quite a few things to watch out for 
in druntime/Phobos and the LDC LLVM optimizations specific to D.

David



P.S.: Juan Manuel Cabo's "avgtime" is a really, _really_ useful 
tool for benchmarking whole programs and actually getting solid 
statistics out of it. Let's add something similar as a library 
for more finely-grained use!
Mar 10 2013
next sibling parent reply "jerro" <a a.com> writes:
  - be open source, or at least source-available, so other 
 people can reproduce the results
  - be *reasonably* self-contained, so that I don't have to 
 spend three hours setting up build dependencies
  - be written mostly in D, I don't want to benchmark GCC
  - work with DMD 2.061 or DMD 2.062
  - run on Linux or OS X
You could also use pfft, then. This branch is the most up to date:

https://github.com/jerro/pfft/tree/use-gcc-udas

If you run:

./build
./build --tests

it will generate the files test/test_float, test/test_double and test/test_real. You can choose the compiler with --dc DMD|GDC|LDC. You can use the -s flag on the test/test_* executables to run a benchmark. The test executables and build.d have a --help option.

To make the comparison fair, it would probably be good to use the --simd sse flag when building the tests, because otherwise the GDC and LDC versions will use AVX if run on a machine that supports it, and DMD won't.

I also have a test/benchmarks.d script that runs the benchmarks and outputs charts, but I made no attempt to make it user friendly, and you would need to read and modify its (a bit messy) code to use it - the only option it takes is the location of the output files. It depends on Plot2kill.
Mar 10 2013
parent "jerro" <a a.com> writes:
 --simd sse flag when building tests
Should be
 --simd sse flag when building the library.
Mar 10 2013
prev sibling next sibling parent "Zardoz" <luis.panadero gmail.com> writes:
You can try with nBodySim https://github.com/Zardoz89/nBodySim
I wrote it and tested it with DMD 2.060, but it should work with 
2.061 or 2.062.
I used it to benchmark "parallel for" vs. "serial for" on some 
computers with 2, 4 and 16 cores, getting a speedup of about x13 
on a 16-core machine.
It has a small bash script for benchmarking that runs the program 
N times and averages the total execution time.
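The run-N-times-and-average idea behind such a script can also be sketched in D itself. The command name "./nBodySim" and the run count are placeholders here, not taken from the repository:

```d
// Sketch: time an external command over several runs and print the mean
// wall-clock time, similar in spirit to a "run N times and average" script.
import std.datetime.stopwatch : AutoStart, StopWatch;
import std.process : execute;
import std.stdio : writefln;

void main(string[] args)
{
    // use the command given on the command line, or a placeholder default
    auto cmd = args.length > 1 ? args[1 .. $] : ["./nBodySim"];
    enum runs = 5;
    double totalMs = 0;
    foreach (i; 0 .. runs)
    {
        auto sw = StopWatch(AutoStart.yes);
        execute(cmd);   // runs the command and waits for it to finish
        sw.stop();
        totalMs += sw.peek.total!"msecs";
    }
    writefln("mean over %d runs: %.1f ms", runs, totalMs / runs);
}
```

Note that averaging wall-clock time this way also folds in process start-up cost, which is exactly what you want when benchmarking whole programs.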

On Sunday, 10 March 2013 at 23:36:26 UTC, David Nadlinger wrote:
 Hi all,

 I am currently finalizing my material for the LDC DConf talk, 
 and I thought it would be nice to include a quick runtime 
 performance comparison between the different compilers, just to 
 give the audience a general sense of what to expect.

 Thus, I am looking for benchmarks to use in the talk. 
 Specifically, they should:

  - be open source, or at least source-available, so other 
 people can reproduce the results
  - be *reasonably* self-contained, so that I don't have to 
 spend three hours setting up build dependencies
  - be written mostly in D, I don't want to benchmark GCC
  - work with DMD 2.061 or DMD 2.062
  - run on Linux or OS X

 I already have a few results (Dmitry's std.regex and std.uni 
 benchmarks, WebDrake's Dregs, some of my own projects, …), but 
 it would be great if some of you could point me to your own set 
 of tests so I can hopefully paint a more complete picture.

 There is a host of results if you search for »benchmark« here 
 on the forums, but many of the discussed test cases are trivial 
 micro-benchmarks, and I was hoping to add a few more elaborate 
 performance tests to my collection.

 In the future – i.e. as soon as possible, but somebody has to 
 actually spend some time on setting things up –, we might also 
 want to set up a nightly tester with such benchmarks to track 
 performance of the different compilers over time. It's not as 
 crucial for GDC and LDC as it is for the upstream backend 
 projects, but there are still quite a few things to watch out 
 for in druntime/Phobos and the LDC LLVM optimizations specific 
 to D.

 David



 P.S.: Juan Manuel Cabo's "avgtime" is a really, _really_ useful 
 tool for benchmarking whole programs and actually getting solid 
 statistics out of it. Let's add something similar as a library 
 for more finely-grained use!
Mar 11 2013
prev sibling next sibling parent Russel Winder <russel winder.org.uk> writes:
On Mon, 2013-03-11 at 00:36 +0100, David Nadlinger wrote:
 Hi all,

 I am currently finalizing my material for the LDC DConf talk, and
 I thought it would be nice to include a quick runtime performance
 comparison between the different compilers, just to give the
 audience a general sense of what to expect.

 Thus, I am looking for benchmarks to use in the talk.
 Specifically, they should:

   - be open source, or at least source-available, so other people
 can reproduce the results
   - be *reasonably* self-contained, so that I don't have to spend
 three hours setting up build dependencies
   - be written mostly in D, I don't want to benchmark GCC
   - work with DMD 2.061 or DMD 2.062
   - run on Linux or OS X
I have a variety of D implementations of "Calculating π by quadrature" (including the one David Simcha contributed for testing std.parallelism). This is a trivial, data parallel, embarrassingly parallel problem that I use for comparing scaling in various languages in comparison to C. There is no experiment set up, just a single run.

It uses SCons for the D build, which will require the fork of SCons I have that includes (almost reasonable) support for building D. On the other hand, all the files are fundamentally self-contained except for reliance on one module in the same directory, so rdmd should work just fine.

The code is released under GPLv3 and is available on GitHub:

git github.com:russel/Pi_Quadrature.git

and my own website, for cloning:

http://www.russel.org.uk/Git/Pi_Quadrature.git

or for browsing:

http://www.russel.org.uk/gitweb/?p=Pi_Quadrature.git;a=summary

If there are any errors or infelicities of D coding I would be very pleased to hear of them, especially if they come with a pull request!

--
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
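For readers unfamiliar with the problem, the core of "π by quadrature" is the identity π = ∫₀¹ 4/(1+x²) dx, approximated here with the midpoint rule. This is only a minimal sketch of the idea using std.parallelism; the real, more varied implementations are in Russel's repository.

```d
// Minimal π-by-quadrature sketch: sum f(x) = 1/(1+x^2) at the midpoint of
// each of n slices, in parallel, then scale by 4*delta to approximate pi.
import std.algorithm.iteration : map;
import std.math : PI, abs;
import std.parallelism : taskPool;
import std.range : iota;
import std.stdio : writefln;

void main()
{
    enum n = 1_000_000;
    immutable double delta = 1.0 / n;
    // one term per slice, evaluated at the slice midpoint (i - 0.5) * delta
    auto terms = iota(1, n + 1)
        .map!(i => 1.0 / (1.0 + ((i - 0.5) * delta) ^^ 2));
    // parallel sum across the task pool's worker threads
    immutable pi = 4.0 * delta * taskPool.reduce!"a + b"(0.0, terms);
    writefln("pi ~= %.10f (error %.2e)", pi, abs(pi - PI));
}
```

Since each term depends only on its own index, the reduction is embarrassingly parallel, which is what makes this a good scaling benchmark.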
Mar 11 2013
prev sibling next sibling parent "Minas Mina" <minas_mina1990 hotmail.co.uk> writes:
You can use my raytracing in D project.

https://github.com/minas1/D_Raytracing

It's very incomplete at the current state by the way (no soft 
shadows, no texturing, no reflection, no antialiasing).
Mar 11 2013
prev sibling parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
11-Mar-2013 03:36, David Nadlinger writes:
 Hi all,

 I am currently finalizing my material for the LDC DConf talk, and I
 thought it would be nice to include a quick runtime performance
 comparison between the different compilers, just to give the audience a
 general sense of what to expect.

 Thus, I am looking for benchmarks to use in the talk. Specifically, they
 should:

   - be open source, or at least source-available, so other people can
 reproduce the results
   - be *reasonably* self-contained, so that I don't have to spend three
 hours setting up build dependencies
   - be written mostly in D, I don't want to benchmark GCC
   - work with DMD 2.061 or DMD 2.062
   - run on Linux or OS X
[snip]
 In the future – i.e. as soon as possible, but somebody has to actually
 spend some time on setting things up –, we might also want to set up a
 nightly tester with such benchmarks to track performance of the
 different compilers over time. It's not as crucial for GDC and LDC as it
 is for the upstream backend projects, but there are still quite a few
 things to watch out for in druntime/Phobos and the LDC LLVM
 optimizations specific to D.
Hopefully benchmarks will soon become part of the auto-tester framework.
 P.S.: Juan Manuel Cabo's "avgtime" is a really, _really_ useful tool for
 benchmarking whole programs and actually getting solid statistics out of
 it. Let's add something similar as a library for more finely-grained use!
+111

--
Dmitry Olshansky
Mar 11 2013