
digitalmars.D - Basic benchmark

reply bearophile <bearophileHUGS lycos.com> writes:
I have adapted another small benchmark to D. This benchmark is less interesting
than the other ones because it mostly tests the optimizations done by the
back-end. This means it's not a problem of the D language or its front-end, so
even if DMD proves not very efficient here, LDC, once finished, may show
significant improvements.
As usual I may have made some errors, so keep your eyes open.

D code:

/*
original code copyright 2004 Christopher W. Cowell-Shah
http://www.cowell-shah.com/research/benchmark/code
other code portions copyright
http://dada.perl.it/shootout/
and Doug Bagley
http://www.bagley.org/~doug/shootout
combined, modified and fixed by Thomas Bruckschlegel -
http://www.tommti-systems.com
*/

import std.c.stdio: printf;
import std.c.time: clock, CLOCKS_PER_SEC, clock_t;


void longArithmetic(long longMin, long longMax) {
    clock_t startTime = clock();

    long longResult = longMin;
    long i = longMin;
    while (i < longMax) {
        longResult -= i++;
        longResult += i++;
        longResult *= i++;
        longResult /= i++;
    }

    clock_t stopTime = clock();
    // clock() returns ticks; dividing by ticks-per-millisecond gives ms
    double elapsedTime = (stopTime - startTime) / (CLOCKS_PER_SEC / 1000.0);
    // %lld, not %ld: D's long is always 64 bits
    printf("Long arithmetic elapsed time: %1.0f ms with longMax %lld\n", elapsedTime, longMax);
    printf(" i: %lld\n", i);
    printf(" longResult: %lld\n", longResult);
}


void nested_loops(int n) {
    clock_t startTime = clock();
    int a, b, c, d, e, f;
    int x = 0;

    for (a = 0; a < n; a++)
        for (b = 0; b < n; b++)
            for (c = 0; c < n; c++)
                for (d = 0; d < n; d++)
                    for (e = 0; e < n; e++)
                        for (f = 0; f < n; f++)
                            x += a + b + c + d + e + f;

    clock_t stopTime = clock();
    double elapsedTime = (stopTime - startTime) / (CLOCKS_PER_SEC / 1000.0);
    printf("Nested Loop elapsed time: %1.0f ms %d\n", elapsedTime, x);
}

int main() {
    long longMin =     10_000_000_000L;
    long longMax =     11_000_000_000L;

    longArithmetic(longMin, longMax);
    nested_loops(40);
    return 0;
}

------------------------

C code, almost the same (you may need to change LL_FORMAT to make it run
correctly):

/*
original code copyright 2004 Christopher W. Cowell-Shah
http://www.cowell-shah.com/research/benchmark/code
other code portions copyright
http://dada.perl.it/shootout/
and Doug Bagley
http://www.bagley.org/~doug/shootout
combined, modified and fixed by Thomas Bruckschlegel -
http://www.tommti-systems.com
*/

#include <time.h>
#include <stdio.h>

// according to your compiler
#define LL_FORMAT "%I64d"
//#define LL_FORMAT "%ld"


void longArithmetic(long long longMin, long long longMax) {
    clock_t startTime = clock();

    long long longResult = longMin;
    long long i = longMin;
    while (i < longMax) {
        longResult -= i++;
        longResult += i++;
        longResult *= i++;
        longResult /= i++;
    }

    clock_t stopTime = clock();
    double elapsedTime = (stopTime - startTime) / (CLOCKS_PER_SEC / 1000.0);
    printf("Long arithmetic elapsed time: %1.0f ms with longMax "LL_FORMAT"\n",
elapsedTime, longMax);
    printf(" i: "LL_FORMAT"\n", i);
    printf(" longResult: "LL_FORMAT"\n", longResult);
}


void nested_loops(int n) {
    clock_t startTime = clock();
    int a, b, c, d, e, f;
    int x=0;

    for (a=0; a<n; a++)
        for (b=0; b<n; b++)
            for (c=0; c<n; c++)
                for (d=0; d<n; d++)
                    for (e=0; e<n; e++)
                        for (f=0; f<n; f++)
                            x+=a+b+c+d+e+f;

    clock_t stopTime = clock();
    double elapsedTime = (stopTime - startTime) / (CLOCKS_PER_SEC / 1000.0);
    printf("Nested Loop elapsed time: %1.0f ms %d\n", elapsedTime, x);
}

int main() {
    long long longMin =     10000000000LL;
    long long longMax =     11000000000LL;

    longArithmetic(longMin, longMax);
    nested_loops(40);
    return 0;
}
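
A portable C99 alternative to picking LL_FORMAT by hand is the PRId64 macro
from <inttypes.h>, which expands to the right conversion specifier for each
platform (a minimal sketch, assuming a C99-conforming toolchain; on MinGW it
expands to the %I64d form used above):

#include <inttypes.h>
#include <stdio.h>

int main(void) {
    int64_t x = 10000000000LL;
    /* PRId64 is a string literal, so it concatenates into the format */
    printf("x: %" PRId64 "\n", x);
    return 0;
}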

-------------------

I have compiled it with GCC and DMD with:
gcc version 4.2.1-dw2 (mingw32-2)
-O3 -s

DMD v1.037
-O -release -inline

---------------------

Timings:

C gcc:
  Long arithmetic: 11.15 s
  Nested Loops: 0.11 s

D dmd:
  Long arithmetic: 63.7 s
  Nested Loops: 6.17 s

Bye,
bearophile
Dec 13 2008
next sibling parent reply Tomas Lindquist Olsen <tomas famolsen.dk> writes:
bearophile wrote:
 I have adapted another small benchmark to D. This benchmark is less
interesting than the other ones because it mostly tests the optimizations done
by the back-end. This means it's not a problem of the D language or its
front-end, so even if DMD here shows to be not much efficient, LDC once
finished may show significant improvements.
 As usual I may have done several errors, so keep your eyes open.
 
..snip..
 
 Timings:
 
 C gcc:
   Long arithmetic: 11.15 s
   Nested Loops: 0.11 s
 
 D dmd:
   Long arithmetic: 63.7 s
   Nested Loops: 6.17 s
 
 Bye,
 bearophile
I tried this out with Tango + DMD 1.033, Tango + LDC r847 and GCC 4.3.2, my timings are as follows, best of three:

$ dmd bench.d -O -release -inline
long arith:  55630 ms
nested loop:  5090 ms

$ ldc bench.d -O3 -release -inline
long arith:  13870 ms
nested loop:   120 ms

$ gcc bench.c -O3 -s -fomit-frame-pointer
long arith: 13600 ms
nested loop:  170 ms

My cpu is: Athlon64 X2 3800+
Dec 13 2008
parent reply "Jarrett Billingsley" <jarrett.billingsley gmail.com> writes:
On Sat, Dec 13, 2008 at 11:16 AM, Tomas Lindquist Olsen
<tomas famolsen.dk> wrote:
 I tried this out with Tango + DMD 1.033, Tango + LDC r847 and GCC 4.3.2, my
 timings are as follows, best of three:

 $ dmd bench.d -O -release -inline
 long arith:  55630 ms
 nested loop:  5090 ms


 $ ldc bench.d -O3 -release -inline
 long arith:  13870 ms
 nested loop:   120 ms


 $ gcc bench.c -O3 -s -fomit-frame-pointer
 long arith: 13600 ms
 nested loop:  170 ms


 My cpu is: Athlon64 X2 3800+
Go LDC!

I hope bearophile will eventually understand that DMD is not good at optimizing code, and so comparing its output to GCC's is ultimately meaningless.
Dec 13 2008
next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Jarrett Billingsley (jarrett.billingsley gmail.com)'s article
 On Sat, Dec 13, 2008 at 11:16 AM, Tomas Lindquist Olsen
 <tomas famolsen.dk> wrote:
 I tried this out with Tango + DMD 1.033, Tango + LDC r847 and GCC 4.3.2, my
 timings are as follows, best of three:

 $ dmd bench.d -O -release -inline
 long arith:  55630 ms
 nested loop:  5090 ms


 $ ldc bench.d -O3 -release -inline
 long arith:  13870 ms
 nested loop:   120 ms


 $ gcc bench.c -O3 -s -fomit-frame-pointer
 long arith: 13600 ms
 nested loop:  170 ms


 My cpu is: Athlon64 X2 3800+
Go LDC! I hope bearophile will eventually understand that DMD is not good at optimizing code, and so comparing its output to GCC's is ultimately meaningless.
Speaking of LDC, any chance that the exception handling on Win32 gets fixed in the near future? I'd like to start using it, but I work on Windows.
Dec 13 2008
parent reply Christian Kamm <kamm-incasoftware removethis.de> writes:
 Speaking of LDC, any chance that the exception handling on Win32 gets
 fixed in the near future?  
No, unfortunately. It's a problem with LLVM only supporting Dwarf2 exception handling. I'm pretty sure it'd work if we used ELF for the object files and GCC for linking, but Windows people tell me this is hardly acceptable.

We won't get 'real' exceptions working on Windows until someone adds SEH support to LLVM. Volunteers?
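
For concreteness, the affected construct is any D code that unwinds through a
throw; a minimal sketch of what does not work on Win32 LDC as described above:

import std.c.stdio: printf;

void mayThrow() {
    throw new Exception("boom");
}

int main() {
    try {
        mayThrow();
    } catch (Exception e) {
        // reaching this handler requires working exception tables in the
        // object file, which is what the Dwarf2-vs-SEH issue blocks on Win32
        printf("caught\n");
    }
    return 0;
}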
Dec 13 2008
next sibling parent reply "Bill Baxter" <wbaxter gmail.com> writes:
On Sun, Dec 14, 2008 at 5:13 AM, Christian Kamm
<kamm-incasoftware removethis.de> wrote:
 Speaking of LDC, any chance that the exception handling on Win32 gets
 fixed in the near future?
No, unfortunately. It's a problem with LLVM only supporting Dwarf2 exception handling. I'm pretty sure it'd work if we used ELF for the object files and GCC for linking, but Windows people tell me this is hardly acceptable. We won't get 'real' exceptions working on Windows until someone adds SEH support to LLVM. Volunteers?
Hmm, so what does clang do then? Does it also just fail on Windows?

Anyway, I signed up for the clang dev mailing list to ask this question there too...

--bb
Dec 13 2008
next sibling parent reply aarti_pl <aarti interia.pl> writes:
Bill Baxter pisze:
 On Sun, Dec 14, 2008 at 5:13 AM, Christian Kamm
 <kamm-incasoftware removethis.de> wrote:
 Speaking of LDC, any chance that the exception handling on Win32 gets
 fixed in the near future?
No, unfortunately. It's a problem with LLVM only supporting Dwarf2 exception handling. I'm pretty sure it'd work if we used ELF for the object files and GCC for linking, but Windows people tell me this is hardly acceptable. We won't get 'real' exceptions working on Windows until someone adds SEH support to LLVM. Volunteers?
Hmm, so what does clang do then? Does it also just fail on Windows? Anyway, I signed up for the clang dev mailing list to ask this question there too... --bb
I don't know how current the clang project's web page is, but I found the following clang status page: http://clang.llvm.org/cxx_status.html

Exception handling is marked there as "Not started/not evaluated" (see point 15 in the status table).

BR
Marcin Kuszczak (aarti_pl)
Dec 13 2008
parent "Bill Baxter" <wbaxter gmail.com> writes:
On Sun, Dec 14, 2008 at 7:55 AM, aarti_pl <aarti interia.pl> wrote:
 Bill Baxter pisze:
 On Sun, Dec 14, 2008 at 5:13 AM, Christian Kamm
 <kamm-incasoftware removethis.de> wrote:
 Speaking of LDC, any chance that the exception handling on Win32 gets
 fixed in the near future?
No, unfortunately. It's a problem with LLVM only supporting Dwarf2 exception handling. I'm pretty sure it'd work if we used ELF for the object files and GCC for linking, but Windows people tell me this is hardly acceptable. We won't get 'real' exceptions working on Windows until someone adds SEH support to LLVM. Volunteers?
Hmm, so what does clang do then? Does it also just fail on Windows? Anyway, I signed up for the clang dev mailing list to ask this question there too... --bb
I don't know how current is web page of clang project, but I found following clang status page: http://clang.llvm.org/cxx_status.html Exception handling is marked over there as "Not started/not evaluated" (see point 15 in status table).
Ok. A fellow named Sebastian, who says he works on clang's C++ support, also said that it didn't support exceptions in C++, and that the current C++ support in clang is basically unusable.

But anyway, they're going to want exception support sooner or later, too. Maybe there's some way for the LDC and clang guys to collaborate or divide up the work of getting Windows exceptions into LDC? Or at least work together to get the LLVM guys to implement it?

--bb
Dec 13 2008
prev sibling parent =?ISO-8859-1?Q?=22J=E9r=F4me_M=2E_Berger=22?= <jeberger free.fr> writes:
Bill Baxter wrote:
 On Sun, Dec 14, 2008 at 5:13 AM, Christian Kamm
 <kamm-incasoftware removethis.de> wrote:
 Speaking of LDC, any chance that the exception handling on Win32 gets
 fixed in the near future?
No, unfortunately. It's a problem with LLVM only supporting Dwarf2 exception handling. I'm pretty sure it'd work if we used ELF for the object files and GCC for linking, but Windows people tell me this is hardly acceptable. We won't get 'real' exceptions working on Windows until someone adds SEH support to LLVM. Volunteers?
Hmm, so what does clang do then? Does it also just fail on Windows? Anyway, I signed up for the clang dev mailing list to ask this question there too...
And what about llvm-g++?

Jerome
Dec 14 2008
prev sibling next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Christian Kamm (kamm-incasoftware removethis.de)'s article
 Speaking of LDC, any chance that the exception handling on Win32 gets
 fixed in the near future?
No, unfortunately. It's a problem with LLVM only supporting Dwarf2 exception handling. I'm pretty sure it'd work if we used ELF for the object files and GCC for linking, but Windows people tell me this is hardly acceptable.
I think this solution is much better than nothing. I assume it would at least work ok on standalone-type projects.
Dec 13 2008
parent reply Aarti_pl <aarti interia.pl> writes:
dsimcha pisze:
 == Quote from Christian Kamm (kamm-incasoftware removethis.de)'s article
 Speaking of LDC, any chance that the exception handling on Win32 gets
 fixed in the near future?
No, unfortunately. It's a problem with LLVM only supporting Dwarf2 exception handling. I'm pretty sure it'd work if we used ELF for the object files and GCC for linking, but Windows people tell me this is hardly acceptable.
I think this solution is much better than nothing. I assume it would at least work ok on standalone-type projects.
Yeah... my thoughts too.

Additionally, maybe there are 3rd-party object file converters, and the "Windows people" could use one of them as a workaround?

BR
Marcin Kuszczak (aarti_pl)
Dec 15 2008
parent reply Aarti_pl <aarti interia.pl> writes:
Aarti_pl pisze:
 dsimcha pisze:
 == Quote from Christian Kamm (kamm-incasoftware removethis.de)'s article
 Speaking of LDC, any chance that the exception handling on Win32 gets
 fixed in the near future?
No, unfortunately. It's a problem with LLVM only supporting Dwarf2 exception handling. I'm pretty sure it'd work if we used ELF for the object files and GCC for linking, but Windows people tell me this is hardly acceptable.
I think this solution is much better than nothing. I assume it would at least work ok on standalone-type projects.
Yeah... Also my thoughts... Additionally maybe there are 3rd party object files converters, and "Windows people" work could be done using them as workaround? BR Marcin Kuszczak (aarti_pl)
I found such a converter (GPL licensed): http://agner.org/optimize/#objconv

Can anyone comment on whether such a workaround would solve the initial problem, at least temporarily? If the answer is yes, can we expect exception handling for LDC on Windows? :-)

BR
Marcin Kuszczak (aarti_pl)
Dec 16 2008
parent reply Christian Kamm <kamm-incasoftware removethis.de> writes:
Christian Kamm:
 No, unfortunately.
 It's a problem with LLVM only supporting Dwarf2 exception handling. I'm
 pretty sure it'd work if we used ELF for the object files and GCC for
 linking, but Windows people tell me this is hardly acceptable.
dsimcha:
 I think this solution is much better than nothing.  I assume it would
 at least
 work ok on standalone-type projects.
Aarti_pl:
 Yeah... Also my thoughts...
 
 Additionally maybe there are 3rd party object files converters, and
 "Windows people" work could be done using them as workaround?
Aarti_pl:
 I found such a converter (GPL licenced):
 http://agner.org/optimize/#objconv
 
 Can anyone comment if such a workaround will solve initial problem? (at
 least temporary).
I doubt it. This utility strips incompatible debug and exception handling information by default and I don't know what happens if you tell it not to. It's pretty likely the runtime won't find the tables in the foreign object format. Also, you'd still need GCC's dwarf2 unwinding runtime.
Dec 19 2008
parent reply aarti_pl <aarti interia.pl> writes:
Christian Kamm pisze:
 Christian Kamm:
 No, unfortunately.
 It's a problem with LLVM only supporting Dwarf2 exception handling. I'm
 pretty sure it'd work if we used ELF for the object files and GCC for
 linking, but Windows people tell me this is hardly acceptable.
dsimcha:
 I think this solution is much better than nothing.  I assume it would
 at least
 work ok on standalone-type projects.
Aarti_pl:
 Yeah... Also my thoughts...

 Additionally maybe there are 3rd party object files converters, and
 "Windows people" work could be done using them as workaround?
Aarti_pl:
 I found such a converter (GPL licenced):
 http://agner.org/optimize/#objconv

 Can anyone comment if such a workaround will solve initial problem? (at
 least temporary).
I doubt it. This utility strips incompatible debug and exception handling information by default and I don't know what happens if you tell it not to. It's pretty likely the runtime won't find the tables in the foreign object format. Also, you'd still need GCC's dwarf2 unwinding runtime.
Well, I am not very familiar with the internals of compilers. I would just like to get my hands on a fully working LDC on Windows :-)

Just one more thought: Agner Fog seems to live in Copenhagen. Maybe it would be a good idea to contact him? Especially for Thomas :-)

Anyway, thanks for your great work.

BR
Marcin Kuszczak (aarti_pl)
Dec 19 2008
parent reply Don <nospam nospam.com> writes:
aarti_pl wrote:
 Christian Kamm pisze:
 Christian Kamm:
 No, unfortunately.
 It's a problem with LLVM only supporting Dwarf2 exception 
 handling. I'm
 pretty sure it'd work if we used ELF for the object files and GCC for
 linking, but Windows people tell me this is hardly acceptable.
dsimcha:
 I think this solution is much better than nothing.  I assume it would
 at least
 work ok on standalone-type projects.
Aarti_pl:
 Yeah... Also my thoughts...

 Additionally maybe there are 3rd party object files converters, and
 "Windows people" work could be done using them as workaround?
Aarti_pl:
 I found such a converter (GPL licenced):
 http://agner.org/optimize/#objconv

 Can anyone comment if such a workaround will solve initial problem? (at
 least temporary).
I doubt it. This utility strips incompatible debug and exception handling information by default and I don't know what happens if you tell it not to. It's pretty likely the runtime won't find the tables in the foreign object format. Also, you'd still need GCC's dwarf2 unwinding runtime.
Well, I am not very familiar with internals of compilers. I just would like to put my hands on fully working LDC on windows :-) Just one more thought: Agner Fog seems to live in Copenhagen. Maybe it would be good idea to contact with him? Especially for Thomas :-)
I'm in contact with him (I contributed to the latest objconv). But don't expect too much -- objconv doesn't do much more than DDL. Adding exception support to LLVM is probably *much* easier than converting the exception support in a compiled object file.
Dec 19 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Don wrote:
 Adding exception support to LLVM is probably *much* easier than 
 converting the exception support in a compiled object file.
There's no way to add it to a compiled object file. The schemes are completely different, and interact with the rest of the code generation. Might as well try to turn hamburger back into a cow.
Dec 20 2008
parent Christian Kamm <kamm-incasoftware removethis.de> writes:
Don wrote:
 Adding exception support to LLVM is probably *much* easier than
 converting the exception support in a compiled object file.
Walter Bright wrote:
 There's no way to add it to a compiled object file. The schemes are
 completely different, and interact with the rest of the code generation.
 Might as well try to turn hamburger back into a cow.
Yes. The most sensible approach would be adding SEH support to LLVM. Neither Tomas nor I am planning to do it, though. We hope that someone who actually develops on Windows will volunteer. As clang and llvm-gcc would also benefit from it, this might do well as a Summer of Code project at LLVM.

From what I hear, exception support in LLVM is due for a revamp anyway, so anyone attempting this would probably get a chance to help redesign the infrastructure and be well supported by the LLVM devs.
Dec 20 2008
prev sibling parent Mosfet <mosfet anonymous.org> writes:
Christian Kamm wrote:
 Speaking of LDC, any chance that the exception handling on Win32 gets
 fixed in the near future?  
No, unfortunately. It's a problem with LLVM only supporting Dwarf2 exception handling. I'm pretty sure it'd work if we used ELF for the object files and GCC for linking, but Windows people tell me this is hardly acceptable. We won't get 'real' exceptions working on Windows until someone adds SEH support to LLVM. Volunteers?
It's in progress for GCC, so maybe that work will help to get it into LLVM.
Dec 15 2008
prev sibling next sibling parent reply Jason House <jason.james.house gmail.com> writes:
Jarrett Billingsley wrote:

 I hope bearophile will eventually understand that DMD is not good at
 optimizing code, and so comparing its output to GCC's is ultimately
 meaningless.
Personally, I appreciate seeing this stuff from bearophile.

I use D in ways where speed really does count. One of my draws to D was that it was a systems language that could be faster than something like Java. I also was sick of C++ and its problems, such as code that requires workarounds for compiler bugs or lack of compiler optimization. It's really sad to see D requiring the same kind of stuff.

For D to become as mainstream as C++, all of this stuff that bearophile posts must be fixed.
Dec 13 2008
next sibling parent reply "Jarrett Billingsley" <jarrett.billingsley gmail.com> writes:
On Sat, Dec 13, 2008 at 12:55 PM, Jason House
<jason.james.house gmail.com> wrote:
 Jarrett Billingsley wrote:

 I hope bearophile will eventually understand that DMD is not good at
 optimizing code, and so comparing its output to GCC's is ultimately
 meaningless.
Personally, I appreciate seeing this stuff from bearophile. I use D in ways where speed really does count. One of my draws to D was that it was a systems language that could be faster than something like Java. I also was sick of C++ and its problems, such as code that requires workarounds for compiler bugs or lack of compiler optimization. It's really sad to see D requiring the same kind of stuff. For D to become as mainstream as C++, all of this stuff that bearophile posts must be fixed.
Walter is the only one who can make DMD faster, and I think his time is much better spent on designing and maintaining the language. The reference compiler is just supposed to be _correct_, not necessarily _fast_.

If Walter spent all his time working on making the DMDFE optimizer better and making the DMD backend produce faster code, he wouldn't have time to work on the language anymore, and it would be duplicated effort since GDC and LDC already do it better.
Dec 13 2008
next sibling parent Fawzi Mohamed <fmohamed mac.com> writes:
On 2008-12-13 19:07:09 +0100, "Jarrett Billingsley" 
<jarrett.billingsley gmail.com> said:

 On Sat, Dec 13, 2008 at 12:55 PM, Jason House
 <jason.james.house gmail.com> wrote:
 Jarrett Billingsley wrote:
 
 I hope bearophile will eventually understand that DMD is not good at
 optimizing code, and so comparing its output to GCC's is ultimately
 meaningless.
Personally, I appreciate seeing this stuff from bearophile. I use D in ways where speed really does count. One of my draws to D was that it was a systems language that could be faster than something like Java. I also was sick of C++ and its problems, such as code that requires workarounds for compiler bugs or lack of compiler optimization. It's really sad to see D requiring the same kind of stuff. For D to become as mainstream as C++, all of this stuff that bearophile posts must be fixed.
Walter is the only one who can make DMD faster, and I think his time is much better spent on designing and maintaining the language. The reference compiler is just supposed to be _correct_, not necessarily _fast_. If Walter spent all his time working on making the the DMDFE optimizer better and making DMD backend produce faster code, he wouldn't have time to work on the language anymore, and it would be duplicated effort since GDC and LDC already do it better.
I fully agree, and it is not that DMD is necessarily slow; it just does not perform some kinds of optimizations. For example, in the nested loops it does not float the operations out of the inner loop to as high up as possible. I would like this to be the case (for example, my multidimensional array library would profit from it), but if you really see in your code that this becomes an issue (looking at profiling), then it is normally quite easy to rewrite the code so that it is fast.

Just looking at very specific benchmarks that test one kind of optimization can be very misleading. It is good to have benchmarks and to know where the weaknesses of a compiler are, but for real code the situation is different. At least for the code that I write, and the typical code I have seen, DMD is reasonably competitive. (This does not mean that it can't and shouldn't be improved ;)

Fawzi
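
As a sketch of the hoisting described above, here is the nested-loop benchmark
rewritten by hand so each partial sum is computed in the outermost loop where
it is invariant (illustrative only; the name nestedLoopsHoisted is made up,
and a real optimizer could go further, e.g. reducing the inner loops to a
closed-form expression):

import std.c.stdio: printf;

void nestedLoopsHoisted(int n) {
    int x = 0;
    for (int a = 0; a < n; a++)
        for (int b = 0; b < n; b++) {
            int ab = a + b;                 // invariant for the c..f loops
            for (int c = 0; c < n; c++) {
                int abc = ab + c;
                for (int d = 0; d < n; d++) {
                    int abcd = abc + d;
                    for (int e = 0; e < n; e++) {
                        int abcde = abcd + e;
                        for (int f = 0; f < n; f++)
                            x += abcde + f; // same total as the original nest
                    }
                }
            }
        }
    printf("x: %d\n", x);
}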
Dec 13 2008
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Jarrett Billingsley wrote:
 Walter is the only one who can make DMD faster, and I think his time
 is much better spent on designing and maintaining the language.  The
 reference compiler is just supposed to be _correct_, not necessarily
 _fast_.  If Walter spent all his time working on making the the DMDFE
 optimizer better and making DMD backend produce faster code, he
 wouldn't have time to work on the language anymore, and it would be
 duplicated effort since GDC and LDC already do it better.
I haven't worked on the code generator / optimizer, other than fixing bugs, since about 1995. While there are obviously specific cases where it could do better, overall it still does a good job. By "good job", I mean that overall it's within 10%.

But there are other reasons to keep the back end. Sometimes, I need to tweak it to support something specific. For example:

1. the stuff to hook together module constructors
2. thread local storage
3. position independent code
4. support for various function call sequences
5. D symbolic debug info
6. generating libraries directly
7. breaking a module up into multiple object files

and coming soon:

8. insertion of memory fences

Other possibilities are D specific optimizations, like taking advantage of immutability and purity, that I doubt exist in a back end designed for C/C++. While of course all this can be added to any back end, I understand how to do it to mine, and it would take me a lot of time to understand another well enough to be able to know just where to put the fix in.

Another thing I'd be very unwilling to give up on with the dmd back end is how fast it is. DMC is *still* by far the fastest compiler out there.
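
Item 1 in the list above refers to D's static this() module constructors: the
compiler and backend together emit per-module records that the runtime
collects and runs, in dependency order, before main() is entered. A minimal
sketch of the language feature being supported:

import std.c.stdio: printf;

static this() {
    // the backend emits the bookkeeping that hooks this into the
    // runtime's list of module constructors, run before main()
    printf("module constructor ran\n");
}

int main() {
    printf("main\n");
    return 0;
}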
Dec 13 2008
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
that I doubt exist in a back end designed for C/C++.<
But note this has the disadvantage of making it less easy to adapt a backend (like LLVM) to D. This may reduce or slow down the diffusion of D to other compilers and platforms. So every one of these backend-specific features has to be chosen with care.

In the near term most people may choose LLVM as the most used backend for the D language, so if LLVM isn't able to support some of those features (exceptions are too basic to drop; they will be necessary), such features will become virtually absent from the D programs you see around, and in the end from the D language itself.

Bye,
bearophile
Dec 14 2008
parent reply dennis luehring <dl.soluz gmx.net> writes:
bearophile schrieb:
 Walter Bright:
that I doubt exist in a back end designed for C/C++.<
But note this has the disadvantage of making less easy to adapt a backend (like LLVM) to D. This may reduce or slow down the diffusion of D to other compilers and platforms. So every one of such special feature backend has to chosen with care. In the near term most people may chose LLVM as the most used backend for the D language, so if LLVM isn't able to support some of those features (but exceptions are too much basic, they will be necessary), such features will become virtually absent from D programs you can see around and in the end from D language itself. Bye, bearophile
His own backend is better for the evolution of D - and THAT is what we want. There are several good (and even bad) compiler backends out there, but the finished language is still missing. The backend problems will be addressed later - and there won't be a show stopper, not in LLVM, GCC or .NET ...
Dec 14 2008
parent reply bearophile <bearophileHUGS lycos.com> writes:
dennis luehring:
 his own backend is better for the evolution of D<
I don't understand most of your post, sorry. For example, are you saying here that the backend of DMD is better for the future evolution of D? This sounds false (no 64 bit, not many changes for many years, etc.).

Bye,
bearophile
Dec 14 2008
parent reply dennis luehring <dl.soluz gmx.net> writes:
bearophile schrieb:
 dennis luehring:
 his own backend is better for the evolution of D<
I don't understand most of your post, sorry. For example are you here saying that the backend of DMD is better for the future evolution of D? This sounds false (No 64 bit, no much changes for many years, etc). Bye, bearophile
better for the future evolution
I mean current... My target is the language D itself - it is much easier for Walter to work the ideas out in his own backend, because he knows exactly how it works - so the speed of integrating language features for us to try out is much higher (and that's good for language evolution). Using a "better" backend in the current phase of D2/(3) evolution would be a great slowdown - you can see the amount of work, and how hard it is to keep up to date, in the gdc/ldc implementations.

The language must become great first - backends, and people who are interested in maintaining them, will come...
Dec 14 2008
parent dennis luehring <dl.soluz gmx.net> writes:
 the language must become great first - backends, and people who are
 interested in maintaining them, will come...
My personal (freaky) hope is that Intel or AMD will get interested in D, or maybe the CodePlay guys... :-)
Dec 14 2008
prev sibling parent reply "Bill Baxter" <wbaxter gmail.com> writes:
On Sun, Dec 14, 2008 at 3:22 PM, Walter Bright
<newshound1 digitalmars.com> wrote:
 Jarrett Billingsley wrote:
 Walter is the only one who can make DMD faster, and I think his time
 is much better spent on designing and maintaining the language.  The
 reference compiler is just supposed to be _correct_, not necessarily
 _fast_.  If Walter spent all his time working on making the the DMDFE
 optimizer better and making DMD backend produce faster code, he
 wouldn't have time to work on the language anymore, and it would be
 duplicated effort since GDC and LDC already do it better.
But there are other reasons to keep the back end. Sometimes, I need to tweak it to support something specific. For example, 1. the stuff to hook together module constructors 2. thread local storage 3. position independent code 4. support for various function call sequences 5. D symbolic debug info 6. Generating libraries directly 7. Breaking a module up into multiple object files and coming soon: 8. insertion of memory fences Other possibilities are D specific optimizations, like taking advantage of immutability and purity, that I doubt exist in a back end designed for C/C++.
Of course that back end was also designed for C/C++ originally, right? But anyway, I agree with bearophile, that requiring too many special features out of a back end will make it hard for any alternative D compilers to keep up.
 While of course all this can be added to any back end, I understand how to
 do it to mine, and it would take me a lot of time to understand another well
 enough to be able to know just where to put the fix in.
That's understandable, but at some point it becomes worth the effort to learn something new. Many people get by just fine using C++. They may be interested in D, but it just takes too much effort. However, a little effort invested in learning D pays off (at least we all believe so or we wouldn't be here). Likewise, if there were a really solid well-maintained back end with a liberal open source license that generates great code, it would very likely be worth your time to learn it, even though it might be rough going in the short term.
 Another thing I'd be very unwilling to give up on with the dmd back end is
 how fast it is. DMC is *still* by far the fastest compiler out there.
I'd gladly trade fast compilation for "has a future" or "supports 64-bit architectures" or "generates faster code" or "doesn't crash when there are too many fixups in main()".

Have you seen the messages about how long it can take to compile DWT applications? DWT progs are already desperately in need of some smarter dependency tracking and the ability to do minimal recompilations. I think implementing that (in a build system or whatever) would more than make up for the loss in raw compilation speed. Besides, I think a chunk of the compilation speed is thanks to the module system, and avoiding the endless reparsing required for C++ #includes. So any D compiler should benefit.

Anyone have the data for the time required to compile tango with DMD vs LDC? It would be interesting to see how bad the difference is.

Anyway, all that said, it's not clear that we really do have that mythical "uber backend" available right now. According to my conversations on the clang mailing list, the current target is for LLVM to be able to fully support a C++ compiler by 2010. I'm not quite sure what all that involves, but apparently it includes things like making exceptions work on Windows. So it certainly does look a bit premature to move over to LLVM as the primary platform for D at this point.

--bb
Dec 14 2008
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 Of course that back end was also designed for C/C++ originally, right?
Pretty much all of them are.
 But anyway, I agree with bearophile, that requiring too many special
 features out of a back end will make it hard for any alternative D
 compilers to keep up.
I'm aware of that, but I'm also aware of the crippling workarounds cfront had to use to avoid changing *anything* in the back end, because cfront had no control over them. There are still some suboptimal things in C++ due to trying to avoid changing the rest of the tool chain.
 While of course all this can be added to any back end, I understand how to
 do it to mine, and it would take me a lot of time to understand another well
 enough to be able to know just where to put the fix in.
That's understandable, but at some point it becomes worth the effort to learn something new. Many people get by just fine using C++. They may be interested in D, but it just takes too much effort. However, a little effort invested in learning D pays off (at least we all believe so or we wouldn't be here). Likewise, if there were a really solid well-maintained back end with a liberal open source license that generates great code, it would very likely be worth your time to learn it, even though it might be rough going in the short term.
Such doesn't exist, however. I remember efforts back in the early 80's to build one (the PCC, for example).
 Another thing I'd be very unwilling to give up on with the dmd back end is
 how fast it is. DMC is *still* by far the fastest compiler out there.
I'd gladly trade fast compilation for "has a future" or "supports 64-bit architectures" or "generates faster code" or "doesn't crash when there are too many fixups in main()". Have you seen the messages about how long it can take to compile DWT applications? DWT progs are already desperately in need of some smarter dependency tracking and ability to do minimal recompilations. I think implementing that (in a build system or whatever) would more than make up for the loss in raw compilation speed. Besides, I think a chunk of the the compilation speed is thanks to the module system, and avoiding the endless reparsing required for C++ #includes. So any D compiler should benefit.
DMC is the fastest C/C++ compiler. DMD benefits from much of the work that went in to make it fast. I did design the semantics of D to favor fast parsing speeds, but there's still the back end speed which has nothing to do with parsing semantics. I found out yesterday that gcc still generates *text* assembler files which are then fed to the assembler for all compiles. That just cannot be made to be speed competitive.
 Anyone have the data for the time required to compile tango with DMD
 vs LDC?  It would be interesting to see how bad the difference is.
 
 Anyway, all that said,  it's not clear that we really do have that
 mythical "uber backend" available right now.
 
 According to my conversations on the clang mailing list, the current
 target is for LLVM to be able to fully support a C++ compiler by 2010.
  I'm not quite sure what all that involves, but apparently it includes
 things like making exceptions work on Windows.  So it certainly does
 look a bit premature to move over to LLVM as the primary platform for
 D at this point.
Abandoning dmd's back end now would entail a 2 year delay with no updates, and I guarantee that there'll be years of wringing bugs out of LLVM. Writing a cg for a complex instruction set like the x86 is, well, pretty complicated <g>, with thousands of special cases.

One thing that made D possible was that I was able to use a mature, professional quality, debugged optimizer and back end. The lack of that has killed many otherwise promising languages in the past.
Dec 14 2008
next sibling parent dsimcha <dsimcha yahoo.com> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Abandoning dmd's back end now then would entail a 2 year delay with no
 updates, and I guarantee that there'll be years of wringing bugs out of
 LLVM. Writing a cg for a complex instruction set like the x86 is, well,
 pretty complicated <g> with thousands of special cases.
 One thing that made D possible was I was able to use a mature,
 professional quality, debugged optimizer and back end. The lack of that
 has killed many otherwise promising languages in the past.
I do agree to a large extent with the argument that Walter's time is better spent on the language itself rather than on messing with compiler back ends, but just to play devil's advocate:

What happens when x86-32 is irrelevant because everyone's using 64-bit? Could DMD eventually be made to support x86-64 codegen w/o too much work, given that it already supports x86-32? How much longer do others on this newsgroup think x86-32 will be the dominant compiler target?
Dec 14 2008
prev sibling parent reply "Bill Baxter" <wbaxter gmail.com> writes:
On Mon, Dec 15, 2008 at 11:37 AM, Walter Bright
<newshound1 digitalmars.com> wrote:
 Bill Baxter wrote:
 Of course that back end was also designed for C/C++ originally, right?
Pretty much all of them are.
 But anyway, I agree with bearophile, that requiring too many special
 features out of a back end will make it hard for any alternative D
 compilers to keep up.
I'm aware of that, but I'm also aware of the crippling workarounds cfront had to use to avoid changing *anything* in the back end, because cfront had no control over them. There are still some suboptimal things in C++ due to trying to avoid changing the rest of the tool chain.
 While of course all this can be added to any back end, I understand how
 to
 do it to mine, and it would take me a lot of time to understand another
 well
 enough to be able to know just where to put the fix in.
That's understandable, but at some point it becomes worth the effort to learn something new. Many people get by just fine using C++. They may be interested in D, but it just takes too much effort. However, a little effort invested in learning D pays off (at least we all believe so or we wouldn't be here). Likewise, if there were a really solid well-maintained back end with a liberal open source license that generates great code, it would very likely be worth your time to learn it, even though it might be rough going in the short term.
Such doesn't exist, however. I remember efforts back in the early 80's to build one (the PCC, for example).
 Another thing I'd be very unwilling to give up on with the dmd back end
 is
 how fast it is. DMC is *still* by far the fastest compiler out there.
I'd gladly trade fast compilation for "has a future" or "supports 64-bit architectures" or "generates faster code" or "doesn't crash when there are too many fixups in main()". Have you seen the messages about how long it can take to compile DWT applications? DWT progs are already desperately in need of some smarter dependency tracking and ability to do minimal recompilations. I think implementing that (in a build system or whatever) would more than make up for the loss in raw compilation speed. Besides, I think a chunk of the the compilation speed is thanks to the module system, and avoiding the endless reparsing required for C++ #includes. So any D compiler should benefit.
DMC is the fastest C/C++ compiler. DMD benefits from much of the work that went in to make it fast. I did design the semantics of D to favor fast parsing speeds, but there's still the back end speed which has nothing to do with parsing semantics. I found out yesterday that gcc still generates *text* assembler files which are then fed to the assembler for all compiles. That just cannot be made to be speed competitive.
 Anyone have the data for the time required to compile tango with DMD
 vs LDC?  It would be interesting to see how bad the difference is.

 Anyway, all that said,  it's not clear that we really do have that
 mythical "uber backend" available right now.

 According to my conversations on the clang mailing list, the current
 target is for LLVM to be able to fully support a C++ compiler by 2010.
  I'm not quite sure what all that involves, but apparently it includes
 things like making exceptions work on Windows.  So it certainly does
 look a bit premature to move over to LLVM as the primary platform for
 D at this point.
Abandoning dmd's back end now then would entail a 2 year delay with no updates, and I guarantee that there'll be years of wringing bugs out of LLVM. Writing a cg for a complex instruction set like the x86 is, well, pretty complicated <g> with thousands of special cases.
Right. I was agreeing with you there (or you are agreeing with me there). From the 2010 figure the clang guys gave me, it indeed sounds like LLVM will not be viable as D's *primary* backend for at least two years.

I'm perfectly happy to accept reasonable arguments that the current alternatives are not good enough yet (LLVM) or have unacceptable licensing terms (GCC). But arguing that it would take too much time to learn something new is not convincing to me. Nor is an argument that the backend needs special feature X. If the back end is really open source, then maintainers should not object to the addition of features needed by a hosted language -- as long as those features do not interfere with other hosted languages, and I see no reason why they should.

--bb
Dec 14 2008
parent Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 I'm perfectly happy to accept reasonable arguments that the current
 alternatives are not good enough yet (LLVM) or have unacceptable
 licensing terms (GCC).
 But arguing that it would take too much time to learn something new is
 not convincing to me.  Nor is an argument that the backend needs
 special feature X.  If the back end is really open source, then
 maintainers should not object to the addition of features needed by a
 hosted language -- as long as those features do not interfere with
 other hosted languages, and I see no reason why they should.
Controlling the back end also enables dmd to do some fun things like generate libraries directly - not only does this dramatically speed up library builds, it increases the granularity by building multiple object files per module. I'd like to have it do a link, too, so dmd could directly generate executables!
Dec 14 2008
prev sibling next sibling parent reply naryl <cyNOSPAM ngs.ru> writes:
 Anyone have the data for the time required to compile tango with DMD
 vs LDC?  It would be interesting to see how bad the difference is.
Compiling tango-user-{ldc,dmd}:

DMD - 20.950s
LDC - 34.891s
Dec 14 2008
parent "Bill Baxter" <wbaxter gmail.com> writes:
On Mon, Dec 15, 2008 at 12:37 PM, naryl <cyNOSPAM ngs.ru> wrote:
 Anyone have the data for the time required to compile tango with DMD
 vs LDC?  It would be interesting to see how bad the difference is.
Compiling tango-user-{ldc,dmd} DMD - 20.950s LDC - 34.891s
Thanks for the data. Seems not so bad to me. Could be better, but could be a lot worse.

--bb
Dec 14 2008
prev sibling parent reply Don <nospam nospam.com> writes:
Bill Baxter wrote:
 Anyway, all that said,  it's not clear that we really do have that
 mythical "uber backend" available right now.
 
 According to my conversations on the clang mailing list, the current
 target is for LLVM to be able to fully support a C++ compiler by 2010.
  I'm not quite sure what all that involves, but apparently it includes
 things like making exceptions work on Windows.
I wonder if there's any chance of getting an LLVM D compiler working before the LLVM C++ compiler works? <g>
Dec 16 2008
next sibling parent "Bill Baxter" <wbaxter gmail.com> writes:
On Tue, Dec 16, 2008 at 9:43 PM, Don <nospam nospam.com> wrote:
 Bill Baxter wrote:
 Anyway, all that said,  it's not clear that we really do have that
 mythical "uber backend" available right now.

 According to my conversations on the clang mailing list, the current
 target is for LLVM to be able to fully support a C++ compiler by 2010.
  I'm not quite sure what all that involves, but apparently it includes
 things like making exceptions work on Windows.
I wonder if there's any chance of getting a LLVM D compiler working before the LLVM C++ compiler works? <g>
Sounds to me like LDC is already ahead of clang's C++.

I actually asked the same question over on the list: "could it be that LDC is already the most advanced compiler available on the LLVM platform?" One guy answered "No, there's llvm-g++", but another guy answered "it depends on whether you count llvm-g++ as an LLVM-based compiler or not". I'm not sure what llvm-g++ is, but from that I'm guessing maybe it's an llvm front end with a g++ back-end. In which case, I wouldn't really count it.

But there are a lot of LLVM projects listed here:
http://llvm.org/ProjectsWithLLVM/

Maybe one of those is more advanced than LDC, not that "advanced" has a very specific meaning anyway.

LDC should definitely be on that list, though.

--bb
Dec 16 2008
prev sibling next sibling parent Brad Roberts <braddr puremagic.com> writes:
 Sounds to me like LDC is already ahead of clang's C++.
 I actually asked the same question over on the list "could it be that
 LDC is already the most advanced compiler availble on the LLVM
 platform?"  One guy answered "No, there's llvm-g++", but another guy
 answered "it depends on whether you count llvm-g++ as an LLVM-based
 compiler or not".    I'm not sure what llvm-g++ is, but from that I'm
 guessing maybe it's an llvm front end with a g++ back-end.  In which
 case, I wouldn't really count it.
 
 But there are a lot of LLVM projects listed here:
 http://llvm.org/ProjectsWithLLVM/
 Maybe one of those is more advanced than LDC, not that "advanced" has
 a very specific meaning anyway.
 
 LDC should definitely be on that list, though.
 
 --bb
llvm-gcc and -g++ are the gcc/g++ front ends bolted onto the llvm middle/backends. So in that respect, almost identical to dmd's fe bolted onto llvm. The major difference being that llvm-gcc/g++ are complete (as far as gcc and llvm are complete).

There used to be a C backend to llvm, but that was abandoned a year or two ago, if I recall correctly. As far as I know, there's never been a c++ backend, nor any use of gcc's backends with llvm.

Since LDC isn't re-implementing the frontend of d, just splicing dmd's onto llvm, and clang is still implementing both c and c++, yes, ldc is further along in some ways than clang is. But it's not exactly an apples to apples comparison (please pardon the pun).

Later,
Brad
Dec 16 2008
prev sibling parent "Bill Baxter" <wbaxter gmail.com> writes:
On Wed, Dec 17, 2008 at 12:36 PM, Brad Roberts <braddr puremagic.com> wrote:
 Sounds to me like LDC is already ahead of clang's C++.
 I actually asked the same question over on the list "could it be that
 LDC is already the most advanced compiler availble on the LLVM
 platform?"  One guy answered "No, there's llvm-g++", but another guy
 answered "it depends on whether you count llvm-g++ as an LLVM-based
 compiler or not".    I'm not sure what llvm-g++ is, but from that I'm
 guessing maybe it's an llvm front end with a g++ back-end.  In which
 case, I wouldn't really count it.

 But there are a lot of LLVM projects listed here:
 http://llvm.org/ProjectsWithLLVM/
 Maybe one of those is more advanced than LDC, not that "advanced" has
 a very specific meaning anyway.

 LDC should definitely be on that list, though.

 --bb
llvm-gcc and -g++ are the gcc/g++ front ends bolted onto the llvm middle/backends. So in that respect, almost identical to dmd's fe bolted onto llvm. The major difference being that llvm-gcc/g++ are complete (as far as gcc and llvm are complete)
Ah, ok. Thanks for clearing that up. So that means I probably should have been bugging the llvm-g++ guys instead of the clang guys. So what is llvm-g++ doing about exception handling and Windows support? Guess I'll have to go sign up for another mailing list now to find out...
 Since LDC isn't re-implementing the frontend of d, just splicing dmd's
 onto llvm and that clang is still implementing both c and c++, yes, ldc
 is further along in some ways than clang is.  But it's not exactly an
 apples to apples comparison (please pardon the pun).
Got it. --bb
Dec 16 2008
prev sibling parent reply "Bill Baxter" <wbaxter gmail.com> writes:
On Sun, Dec 14, 2008 at 3:07 AM, Jarrett Billingsley
<jarrett.billingsley gmail.com> wrote:
 On Sat, Dec 13, 2008 at 12:55 PM, Jason House
 <jason.james.house gmail.com> wrote:
 Jarrett Billingsley wrote:

 I hope bearophile will eventually understand that DMD is not good at
 optimizing code, and so comparing its output to GCC's is ultimately
 meaningless.
Personally, I appreciate seeing this stuff from bearophile. I use D in ways where speed really does count. One of my draws to D was that it was a systems language that could be faster than something like Java. I also was sick of C++ and its problems, such as code that requires workarounds for compiler bugs or lack of compiler optimization. It's really sad to see D requiring the same kind of stuff. For D to become as mainstream as C++, all of this stuff that bearophile posts must be fixed.
Walter is the only one who can make DMD faster, and I think his time is much better spent on designing and maintaining the language.
I think the point is not to convince Walter to spend time working on DMD's optimizer, but to convince him that the DMD optimizer is hopelessly obsolete and thus should be abandoned in favor of another, like LDC.

There's also the 64-bit issue. I don't see Walter ever making the current toolchain 64-bit capable (at least not on Windows). This is going to become an increasingly ridiculous limitation for a supposed "systems programming language" as time marches on.

At some point something has to change.
 The reference compiler is just supposed to be _correct_, not necessarily
 _fast_.
Fortunately it's not an either/or situation. If Walter chooses to move the reference compiler to a mainstream compiler infrastructure, then *he* can work on making the reference compiler correct, while many *other people* (including many who don't know anything about D) work on making the compiler fast.
 If Walter spent all his time working on making the the DMDFE
 optimizer better and making DMD backend produce faster code, he
 wouldn't have time to work on the language anymore,
Agreed. That would be like putting lipstick on the proverbial pig.
 and it would be
 duplicated effort since GDC and LDC already do it better.
I guess it's possible to imagine a world where Walter cranks out DMDFE code coupled to a sub-par DMD backend that no one uses, since everyone has moved on to LDC or something. But why go there? LDC is completely open source. There's no reason the reference D compiler can't also be the fast D compiler. And become more open in the process, too.

That reference compiler / fast compiler dichotomy might have been ok for C++ back in the old "cfront" days, but in those days people everywhere were dying for something a little more high-level than C. Today they aren't. In those days the big corps took notice of C++ and most vendors were maintaining their own cfront-based compilers for their own platforms with their own custom back-end optimizations. There's nothing like that happening with D today. Today the big corps have C++, and if that's not high-level enough then they have 32-dozen scripting languages and VM hosted byte-compiled languages to choose from. So for a niche language like D, making the default compiler be a sucky compiler is very bad marketing in my opinion.

And talk about duplicating efforts -- every time Walter releases a new reference compiler, the developers on the fast compiler have to scramble to incorporate those changes, when they could be working on bug fixes and other useful performance improvements. And downstream bugfixes is another area of duplicated efforts -- already LDC developers have fixed various bugs in the DMDFE, and these must then be posted to bugzilla for Walter to eventually put back into his version of DMDFE.

That said, LDC isn't quite there yet, especially on Windows, but it would be very encouraging to see Walter take at least a little interest in it. The transition would be a little painful for a while, but much less painful than trying to write a new back end from scratch, and in the end I believe it would make D a much more viable platform going forward.

--bb
Dec 13 2008
next sibling parent reply Don <nospam nospam.com> writes:
Bill Baxter wrote:
 On Sun, Dec 14, 2008 at 3:07 AM, Jarrett Billingsley
 <jarrett.billingsley gmail.com> wrote:
 On Sat, Dec 13, 2008 at 12:55 PM, Jason House
 <jason.james.house gmail.com> wrote:
 Jarrett Billingsley wrote:

 I hope bearophile will eventually understand that DMD is not good at
 optimizing code, and so comparing its output to GCC's is ultimately
 meaningless.
Personally, I appreciate seeing this stuff from bearophile. I use D in ways where speed really does count. One of my draws to D was that it was a systems language that could be faster than something like Java. I also was sick of C++ and its problems, such as code that requires workarounds for compiler bugs or lack of compiler optimization. It's really sad to see D requiring the same kind of stuff. For D to become as mainstream as C++, all of this stuff that bearophile posts must be fixed.
Walter is the only one who can make DMD faster, and I think his time is much better spent on designing and maintaining the language.
I think the point is not to convince Walter to spend time working on DMD's optimizer, but to convince him that the DMD optimizer is hopelessly obsolete and thus should be abandoned in favor of another, like LDC. There's also the 64-bit issue. I don't see Walter ever making the current toolchain 64-bit capable (at least not on Windows). This is going to become an increasingly ridiculous limitation for a supposed "systems programming language" as time marches on. At some point something has to change.
 The reference compiler is just supposed to be _correct_, not necessarily
 _fast_.
Fortunately it's not an either/or situation. If Walter chooses to move the reference compiler to a mainstream compiler infrastructure, then *he* can work on making the reference compiler correct, while many *other people* (including many who don't know anything about D) work on making the compiler fast.
 If Walter spent all his time working on making the the DMDFE
 optimizer better and making DMD backend produce faster code, he
 wouldn't have time to work on the language anymore,
Agreed. That would be like putting lipstick on the proverbial pig.
 and it would be
 duplicated effort since GDC and LDC already do it better.
I guess it's possible to imagine a world where Walter cranks out DMDFE code coupled to a sub-par DMD backend that no one uses, since everyone has moved on to LDC or something. But why go there? LDC is completely open source. There's no reason the reference D compiler can't also be the fast D compiler. And become more open in the process, too. That reference compiler / fast compiler dichotomy might have been ok for C++ back in the old "cfront" days, but in those days people everywhere were dying for something a little more high-level than C. Today they aren't. In those days the big corps took notice of C++ and most vendors were maintaining their own cfront-based compilers for their own platforms with their own custom back-end optimizations. There's nothing like that happening with D today. Today the big corps have C++ and if that's not high-level enough then they have 32-dozen scripting languages and VM hosted byte-compiled languages to choose from. So for a niche language like D, making the default compiler be a sucky compiler is very bad marketing in my opinion. And talk about duplicating efforts -- every time Walter releases a new reference compiler, the developers on the fast compiler have to scramble to incorporate those changes, when they could be working on bug fixes and other useful performance improvements. And downstream bugfixes is another area of duplicated efforts -- already LDC developers have fixed various bugs in the DMDFE, and these must then be posted to bugzilla for Walter to eventually put back into his version of DMDFE. That said, LDC isn't quite there yet, especially on Windows, but it would be very encouraging to see Walter take at least a little interest in it. The transition would be a little painful for a while, but much less painful than trying to write a new back end from scratch, and in the end I believe it would make D a much more viable platform going forward. --bb
After having seen GDC fail to live up to expectations and become abandonware, it's unsurprising that Walter's unwilling to invest any emotional energy into LDC just yet. In six months the story may be completely different.
Dec 13 2008
parent "Bill Baxter" <wbaxter gmail.com> writes:
On Sun, Dec 14, 2008 at 4:41 AM, Don <nospam nospam.com> wrote:
 Bill Baxter wrote:
 That said, LDC isn't quite there yet, especially on Windows, but it
 would be very encouraging to see Walter take at least a little
 interest in it.  The transition would be a little painful for a while,
 but much less painful than trying to write a new back end from
 scratch, and in the end I believe it would make D a much more viable
 platform going forward.

 --bb
 After having seen GDC fail to live up to expectations and become
 abandonware, it's unsurprising that Walter's unwilling to invest any
 emotional energy into LDC just yet. In six months the story may be
 completely different.
I think licensing issues were a serious obstacle to Walter moving DMD over to GDC. But let's say they weren't, and Walter had moved DMD over to GDC when Dave was still working actively on it. If that had happened, then today we'd have a GDC-based DMD compiler that Walter maintained by himself, BUT which could benefit from all the non-D developers who work on GCC's back end. Compare that with the situation today: Walter maintains DMD by himself, *nobody* works on the back end, and nobody even has access to work on the back end, since it is closed source.

So, even given that Dave has abandoned GDC, that still sounds better to me.

--bb
Dec 13 2008
prev sibling parent reply Jason House <jason.james.house gmail.com> writes:
Bill Baxter wrote:

 [...]

 That said, LDC isn't quite there yet, especially on Windows, but it would
 be very encouraging to see Walter take at least a little interest in it.
 The transition would be a little painful for a while, but much less painful
 than trying to write a new back end from scratch, and in the end I believe
 it would make D a much more viable platform going forward.

 --bb
I couldn't agree more! I never understood why people were so anti-gdc. I would not be surprised to hear that the gdc developer(s) stopped after hearing just how little people appreciated their hard work.
Dec 13 2008
next sibling parent "Bill Baxter" <wbaxter gmail.com> writes:
On Sun, Dec 14, 2008 at 9:15 AM, Jason House
<jason.james.house gmail.com> wrote:
 I couldn't agree more!

 I never understood why people were so anti-gdc.  I would not be surprised
 to hear that the gdc developer(s) stopped after hearing just how little
 people appreciated their hard work.
Well, I think it has more to do with the secretive way in which gdc was developed. I don't know that it was intentionally so, but I read through the old NG messages once from back when Dave first announced it, and he always kept things very close to the chest from the very beginning. Others were apparently working on a GCC-based port of D at the same time and going back and forth in the NG about how to get things working, when Dave popped in and said "I have ported D to GCC".

I have no reason to believe he was intentionally trying to keep people away from helping him, but he's never shown much interest in collaborating as far as I recall. Some people just prefer to work alone.

On the other hand, LDC already has multiple contributors and has been developed in an open and welcoming way from the very beginning.

--bb
Dec 13 2008
prev sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Jason House (jason.james.house gmail.com)'s article
 I couldn't agree more!

 I never understood why people were so anti-gdc.  I would not be surprised
 to hear that the gdc developer(s) stopped after hearing just how little
 people appreciated their hard work.

Well, GDC hasn't released an update since the Stone Age. A few days ago, the first checkins in months took place. I still don't know whether the project is moribund or why else it might be so far behind the curve. My hope is that the GDC people are just waiting for the dust to settle a little on D2, rather than maintaining a moving target.
Dec 13 2008
parent reply Lars Ivar Igesund <larsivar igesund.net> writes:
dsimcha wrote:

 == Quote from Jason House (jason.james.house gmail.com)'s article
 I couldn't agree more!

 I never understood why people were so anti-gdc.  I would not be surprised
 to hear that the gdc developer(s) stopped after hearing just how little
 people appreciated their hard work.

 Well, GDC hasn't released an update since the Stone Age. A few days ago,
 the first checkins in months took place. I still don't know whether the
 project is moribund or why else it might be so far behind the curve. My
 hope is that the GDC people are just waiting for the dust to settle a
 little on D2, rather than maintaining a moving target.
Related to the commit the other day is this post I made:
http://www.dsource.org/projects/tango/forums/topic/664

I think Arthur intends to have something posted on D.announce too.

--
Lars Ivar Igesund
blog at http://larsivi.net
DSource, #d.tango & #D: larsivi
Dancing the Tango
Dec 14 2008
parent reply "Bill Baxter" <wbaxter gmail.com> writes:
On Sun, Dec 14, 2008 at 7:23 PM, Lars Ivar Igesund <larsivar igesund.net> wrote:
 dsimcha wrote:

 [...]

 Related to the commit the other day is this post I made:
 http://www.dsource.org/projects/tango/forums/topic/664

 I think Arthur intends to have something posted on D.announce too.
So who is this Arthur, and what connection does he have to GDC? Is this a new fork of GDC?

Also, from the forum post it wasn't clear to me whether this was actually about GDC contributions or about contributions to Tango to make it work with GDC. (I was going to ask on the Tango forum, but for some odd reason the thread is locked.)

--bb
Dec 14 2008
parent Lars Ivar Igesund <larsivar igesund.net> writes:
Bill Baxter wrote:

 [...]

 So who is this Arthur, and what connection does he have to GDC? Is this a
 new fork of GDC? Also, from the forum post it wasn't clear to me whether
 this was actually about GDC contributions or about contributions to Tango
 to make it work with GDC. (I was going to ask on the Tango forum, but for
 some odd reason the thread is locked.)

 --bb
Sorry, I clarified that it was about GDC contributions. Arthur is Debian's GDC maintainer and, as far as I know, the only person besides David with commit access to GDC.

The forum is locked because it is what provides the list on the front page with items, and as it is, the forum software doesn't allow locking a forum for new threads only. Hopefully that can be added to the software later on.

--
Lars Ivar Igesund
blog at http://larsivi.net
DSource, #d.tango & #D: larsivi
Dancing the Tango
Dec 14 2008
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Jarrett Billingsley wrote:
 I hope bearophile will eventually understand that DMD is not good at
 optimizing code, and so comparing its output to GCC's is ultimately
 meaningless.
The long arithmetic benchmark is completely (and I mean completely) dominated by the time spent in the long divide helper function. The timing results for it really have nothing to do with the compiler optimizer or code generator. Reducing the number of instructions in the loop by one or improving pairing slightly does nothing when stacked up against maybe 50 instructions in the long divide helper function.

The long divide helper dmd uses (phobos\internal\llmath.d) is code I basically wrote 25 years ago and have hardly looked at since, except to carry it forward. It uses the classic shift-and-subtract algorithm, but there are better ways to do it now with the x86 instruction set. Time to have some fun doing hand-coded assembler again! Fixing this should bring that loop timing up to par, but it's still not a good benchmark for a code generator.

Coming up with good *code generator* benchmarks is hard, and really can't be done without looking at the assembler output to make sure that what you think is happening is what is actually happening. I've seen a lot of benchmarks over the years, and too many of them do things like measure malloc() or printf() speed instead of loop optimizations or other intended measurements. Caching and alignment issues can also dominate the results.

I haven't looked closely at the other loop yet.
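For reference, the classic shift-and-subtract (restoring) scheme looks roughly like the minimal D sketch below. This is an illustration of the general algorithm only, not the actual phobos\internal\llmath.d code (the real helper also handles signed operands); the function name is made up for the example.

// Minimal sketch of shift-and-subtract (restoring) division for
// unsigned 64-bit operands. Assumes divisor != 0.
ulong shiftSubDivide(ulong dividend, ulong divisor)
{
    ulong quotient = 0;
    ulong remainder = 0;
    for (int bit = 63; bit >= 0; bit--)
    {
        // bring the next dividend bit down into the remainder
        remainder = (remainder << 1) | ((dividend >> bit) & 1);
        if (remainder >= divisor)
        {
            remainder -= divisor;    // subtract when the divisor fits
            quotient |= 1UL << bit;  // and record a 1 in the quotient
        }
    }
    return quotient;
}

Each divide costs a compare and possible subtract per bit -- on the order of 64 iterations -- which is consistent with Walter's point that the helper call, not the loop code around it, dominates the long arithmetic timing.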
Dec 14 2008
parent reply Jason House <jason.james.house gmail.com> writes:
I have already hit long-division-related speed issues in my D code. Sometimes
simple things can dominate a benchmark, but those same simple things can
dominate user code too!

Walter Bright Wrote:

 Jarrett Billingsley wrote:
 I hope bearophile will eventually understand that DMD is not good at
 optimizing code, and so comparing its output to GCC's is ultimately
 meaningless.

 [...] The long arithmetic benchmark is completely (and I mean completely)
 dominated by the time spent in the long divide helper function. [...]
Dec 14 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Jason House wrote:
 I have already hit long division related speed issues in my D code.
 Sometimes simple things can dominate a benchmark, but those same
 simple things can dominate user code too!
I completely agree, and I'm in the process of fixing the long division. My point was that it has nothing to do with the code generator, and that drawing conclusions from a benchmark result can be tricky.
Dec 14 2008
parent "Bill Baxter" <wbaxter gmail.com> writes:
On Mon, Dec 15, 2008 at 2:13 PM, Walter Bright
<newshound1 digitalmars.com> wrote:
 Jason House wrote:
 I have already hit long division related speed issues in my D code.
 Sometimes simple things can dominate a benchmark, but those same
 simple things can dominate user code too!
 I completely agree, and I'm in the process of fixing the long division. My
 point was that it has nothing to do with the code generator, and that
 drawing conclusions from a benchmark result can be tricky.
That was fast!
http://www.dsource.org/projects/phobos/changeset/884

--bb
Dec 14 2008
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Jarrett Billingsley wrote:
 On Sat, Dec 13, 2008 at 11:16 AM, Tomas Lindquist Olsen
 <tomas famolsen.dk> wrote:
 I tried this out with Tango + DMD 1.033, Tango + LDC r847 and GCC 4.3.2, my
 timings are as follows, best of three:

 $ dmd bench.d -O -release -inline
 long arith:  55630 ms
 nested loop:  5090 ms


 $ ldc bench.d -O3 -release -inline
 long arith:  13870 ms
 nested loop:   120 ms


 $ gcc bench.c -O3 -s -fomit-frame-pointer
 long arith: 13600 ms
 nested loop:  170 ms


 My cpu is: Athlon64 X2 3800+
 Go LDC!

 I hope bearophile will eventually understand that DMD is not good at
 optimizing code, and so comparing its output to GCC's is ultimately
 meaningless.
I must have missed the memo. How is dmd not good at optimizing code? Without knowing many details about it, my understanding is that dmd performs common optimizations reasonably well and that this particular problem has to do with the long division routine.

Andrei
Dec 15 2008
parent reply "Bill Baxter" <wbaxter gmail.com> writes:
On Tue, Dec 16, 2008 at 11:09 AM, Andrei Alexandrescu
<SeeWebsiteForEmail erdani.org> wrote:
 [...]

 I must have missed the memo. How is dmd not good at optimizing code?
 Without knowing many details about it, my understanding is that dmd
 performs common optimizations reasonably well and that this particular
 problem has to do with the long division routine.
It's pretty well proven that for floating point code, DMD tends to generate code about 50% slower than GCC.

--bb
Dec 15 2008
parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Tue, 16 Dec 2008 05:28:16 +0300, Bill Baxter <wbaxter gmail.com> wrote:

 [...]

 It's pretty well proven that for floating point code, DMD tends to
 generate code about 50% slower than GCC.
But other than that it is pretty good. And man, it is so fast!
Dec 15 2008
parent reply "Bill Baxter" <wbaxter gmail.com> writes:
On Tue, Dec 16, 2008 at 12:00 PM, Denis Koroskin <2korden gmail.com> wrote:
 [...]

 It's pretty well proven that for floating point code, DMD tends to
 generate code about 50% slower than GCC.

 But other than that it is pretty good.
Yep, it's more than 100x faster than a straightforward Python port of similar code, for instance. (I did some benchmarking using a D port of the Laplace solver here: http://www.scipy.org/PerformancePython -- I think bearophile did these comparisons again himself more recently, too.) There I saw DMD about 50% slower than g++. And I've seen figures in the neighborhood of 50% come up a few times since then in other float-intensive benchmarks, like the raytracer that someone ported from C++.

So it is certainly fast. But one of the draws of D is precisely that, that it is fast. If you're after code that runs as fast as possible, 50% slower than the competition is plenty of justification to go look elsewhere for your high-performance language. A 50% hit may not really be relevant at the end of the day, but I know I used to avoid g++ like the plague because even its output isn't that fast compared to MSVC++ or Intel's compiler, even though the difference is maybe only 10% or so. I was working on interactive fluid simulation, so I wanted every bit of speed I could get out of the processor. With interactive stuff, a 10% difference really can matter, I think.
 And man, it is so fast!
You mean compile times?

--bb
Dec 15 2008
parent "Denis Koroskin" <2korden gmail.com> writes:
On Tue, 16 Dec 2008 06:23:14 +0300, Bill Baxter <wbaxter gmail.com> wrote:

 [...]

 And man, it is so fast!

 You mean compile times?
Yeah.
Dec 15 2008
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 Timings:
 
 C gcc:
   Long arithmetic: 11.15 s
   Nested Loops: 0.11 s
 
 D dmd:
   Long arithmetic: 63.7 s
   Nested Loops: 6.17 s
I suggest running obj2asm on the resulting obj files and seeing what the real difference is.
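For anyone who wants to reproduce that comparison, it would go roughly as follows -- a sketch only, reusing the bench.d/bench.c file names from earlier in the thread; obj2asm is the Digital Mars object-file disassembler, and on the GCC side "gcc -S" produces a comparable listing:

$ dmd -c -O -release -inline bench.d
$ obj2asm bench.obj > bench_dmd.asm

$ gcc -S -O3 -fomit-frame-pointer bench.c

gcc -S leaves its listing in bench.s; comparing the two files shows what each compiler actually emitted for the inner loops.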
Dec 13 2008
parent bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 I suggest running obj2asm on the resulting obj files and see what the 
 real difference is.
I am sorry, I have just started learning x86 asm, so I am not much good at it yet :-)

Here you can see the asm from DMD followed by the one from GCC:
http://codepad.org/Kjttfq4z

Bye,
bearophile
Dec 13 2008
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Tomas Lindquist Olsen:
 ...
 $ dmd bench.d -O -release -inline
 long arith:  55630 ms
 nested loop:  5090 ms
 
 $ ldc bench.d -O3 -release -inline
 long arith:  13870 ms
 nested loop:   120 ms
 
 $ gcc bench.c -O3 -s -fomit-frame-pointer
 long arith: 13600 ms
 nested loop:  170 ms
...
Very nice results. If you have a little more time, I have another small C and D benchmark to offer you, to be tested with GCC and LDC. It's the C version of the "nbody" benchmark from the Shootout, a very close translation of it to D (file name "nbody_d1.d"), and my faster D version (file name "nbody_d2.d"; "faster" relative to the DMD compiler, of course). I haven't tried LDC yet, so I can't be sure what the timings will show.

Thank you for your work,
bearophile
Dec 14 2008
next sibling parent reply The Anh Tran <trtheanh gmail.com> writes:
bearophile wrote:

 [...]

 Very nice results. If you have a little more time, I have another small C
 and D benchmark to offer you, to be tested with GCC and LDC. It's the C
 version of the "nbody" benchmark from the Shootout, a very close
 translation of it to D (file name "nbody_d1.d"), and my faster D version
 (file name "nbody_d2.d").
IMHO, spectralnorm is 'a little bit' better than nbody. :)
Dec 14 2008
parent bearophile <bearophileHUGS lycos.com> writes:
The Anh Tran:
 IMHO, spectralnorm is 'a little bit' better than nbody.
 :)
No, here I'd like to see a benchmark of LDC. For that I think 'nbody' is the best (and I think the second best is 'recursive').

Bye,
bearophile
Dec 14 2008
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Lindquist and another kind person on IRC have given their timings for the D
and C versions of the 'nbody' code attached to my previous email (they have
tested the first D version only).

Timings N=20_000_000, on an athlon64 x2 3800+ CPU:
  gcc: 10.8  s
  ldc: 14.2  s
  dmd: 15.5  s
  gdc:

------------

Timings N=10_000_000, on an AMD 2500+ CPU:
  gcc:  8.78 s
  ldc: 12.26 s
  dmd: 13.9  s
  gdc:  9.82 s

Compiler arguments used on the AMD 2500+ CPU:
  GCC: -O3 -s -fomit-frame-pointer
  DMD: -release -O
  GDC: -O3 -s -fomit-frame-pointer
  LDC: -ofmain -O3 -release -inline

This time the results seem good enough to me.

This benchmark concerns FP computations; the fastest language for this naive
physics simulation is Fortran 90, as can be seen in the later pages of the
Shootout.
(I'd like to test one last one, the 'recursive' benchmark, but it's for later).

Bye,
bearophile
Dec 14 2008
parent bearophile <bearophileHUGS lycos.com> writes:
(The other kind person on IRC was wilsonk.)
The timing results for the nbody benchmark (the code is attached to one of my
last posts), as found by wilsonk on IRC, N=10_000_000, on an AMD 2500+ CPU:
  64-bit GCC C code: 3.31 s
  64-bit LDC D code: 5.74 s
  
You can see the ratio is very similar to the 32-bit one (but the absolute
timings are much lower).

------------------------

Then the timings for the recursive4 benchmark (the code is attached to this
post):
On an AMD 2500+ CPU, by wilsonk, 64 bit timings, recursive4:
  C code GCC, N=13: 22.93 s
  D code LDC, N=13: 28.88 s


Timings by Elrood, recursive4 benchmark, on a 32-bit WinXP, AMD x2 3600 CPU:
  C code GCC, N=13: ~25 s
  D code LDC, N=13: >60 s

For this benchmark LLVM still shows a need for some improvement :-)

Bye,
bearophile
Dec 14 2008