
digitalmars.D.learn - Garbage collection in D

reply Diwaker Gupta <diwaker floatingsun.net> writes:
I've just started to play around with D, and I'm hoping someone can clarify
this. I wrote a very simple program that just allocates lots of objects, in
order to benchmark the garbage collector in D. For comparison, I wrote the
programs in C++, Java and D:
C++: http://gist.github.com/122708
Java: http://gist.github.com/122709
D: http://gist.github.com/121790

With an iteration count of 99999999, I get the following numbers:
JAVA:
0:01.60 elapsed, 1.25 user, 0.28 system
C++:
0:04.99 elapsed, 4.97 user, 0.00 system
D:
0:25.28 elapsed, 25.22 user, 0.00 system

As you can see, D is abysmally slow compared to C++ and Java. This is using the
GNU gdc compiler. I'm hoping the community can give me some insight on what is
going on.

Thanks,
Diwaker
Jun 02 2009
next sibling parent Tim Matthews <tim.matthews7 gmail.com> writes:
Diwaker Gupta wrote:
 I've just started to play around with D, and I'm hoping someone can clarify
this. I wrote a very simple program that just allocates lots of objects, in
order to benchmark the garbage collector in D. For comparison, I wrote the
programs in C++, Java and D:
 C++: http://gist.github.com/122708
 Java: http://gist.github.com/122709
 D: http://gist.github.com/121790
 
 With an iteration count of 99999999, I get the following numbers:
 JAVA:
 0:01.60 elapsed, 1.25 user, 0.28 system
 C++:
 0:04.99 elapsed, 4.97 user, 0.00 system
 D:
 0:25.28 elapsed, 25.22 user, 0.00 system
 
 As you can see, D is abysmally slow compared to C++ and Java. This is using
the GNU gdc compiler. I'm hoping the community can give me some insight on what
is going on.
 
 Thanks,
 Diwaker
Can someone try dmd and ldc, as gdc is dying (if not already dead)? Also, hard-code the iteration count and take out the command-line reading and printing to make it more accurate.
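A minimal sketch of that setup (the AllocationItem class below is a guess at what the gist contains, not the original source; the iteration count is hard-coded and there is no I/O in the measured part):

class AllocationItem {
    int value;
    this(int v) { this.value = v; }
}

int run() {
    const int iters = 99_999_999;            // hard-coded instead of read from the command line
    int sum = 0;
    for (int i = 0; i < iters; ++i) {
        auto item = new AllocationItem(i);   // the allocation being measured
        sum += item.value;
    }
    return sum;
}

void main() {
    // No parsing or printing inside the measured work; just keep the result live.
    auto r = run();
    assert(r != 0);
}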
Jun 02 2009
prev sibling next sibling parent reply Jarrett Billingsley <jarrett.billingsley gmail.com> writes:
On Tue, Jun 2, 2009 at 8:40 PM, Diwaker Gupta <diwaker floatingsun.net> wrote:
 I've just started to play around with D, and I'm hoping someone can clarify this. I wrote a very simple program that just allocates lots of objects, in order to benchmark the garbage collector in D. For comparison, I wrote the programs in C++, Java and D:
 C++: http://gist.github.com/122708
 Java: http://gist.github.com/122709
 D: http://gist.github.com/121790

 With an iteration count of 99999999, I get the following numbers:
 JAVA:
 0:01.60 elapsed, 1.25 user, 0.28 system
 C++:
 0:04.99 elapsed, 4.97 user, 0.00 system
 D:
 0:25.28 elapsed, 25.22 user, 0.00 system

 As you can see, D is abysmally slow compared to C++ and Java. This is using the GNU gdc compiler. I'm hoping the community can give me some insight on what is going on.

D's GC is not nearly as well-developed as that of Java's, and its performance is not that stellar. Sorry, but you are not the first to discover this by any stretch of the imagination. (On a side note, I have a feeling you and bearophile will get on famously.)

Also, benchmarking a GC against manual memory management doesn't do much for you. It's apples and oranges. Though it is funny to see how badly Java beats C++ there.
Jun 02 2009
next sibling parent BCS <none anon.com> writes:
Hello Jarrett,

 On Tue, Jun 2, 2009 at 8:40 PM, Diwaker Gupta
 <diwaker floatingsun.net> wrote:
 
 I've just started to play around with D, and I'm hoping someone can
 clarify this. I wrote a very simple program that just allocates lots
 of objects, in order to benchmark the garbage collector in D. For
 comparison, I wrote the programs in C++, Java and D:
 
 C++: http://gist.github.com/122708
 
 Java: http://gist.github.com/122709
 
 D: http://gist.github.com/121790
 
 With an iteration count of 99999999, I get the following numbers:
 JAVA:
 0:01.60 elapsed, 1.25 user, 0.28 system
 C++:
 0:04.99 elapsed, 4.97 user, 0.00 system
 D:
 0:25.28 elapsed, 25.22 user, 0.00 system
 As you can see, D is abysmally slow compared to C++ and Java. This is
 using the GNU gdc compiler. I'm hoping the community can give me some
 insight on what is going on.
 
D's GC is not nearly as well-developed as that of Java's, and its performance is not that stellar. Sorry, but you are not the first to discover this by any stretch of the imagination. (On a side note, I have a feeling you and bearophile will get on famously.) Also, benchmarking a GC against manual memory management doesn't do much for you. It's apples and oranges. Though it is funny to see how badly Java beats C++ there.
Java may be able to tell that the allocation never needs to be kept and just reuse the same space on the stack. Heck, it might not even be doing that: since the class only ever holds the same value as i, it might just be skipping the new altogether.
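One way to check the D side of that guess empirically is to ask the GC how much memory the loop really consumed. The sketch below is illustrative only: it assumes a druntime recent enough to have core.memory.GC.stats(), and an AllocationItem class shaped like the one in the gist.

import core.memory : GC;
import std.stdio : writeln;

class AllocationItem {
    int value;
    this(int v) { this.value = v; }
}

void main() {
    const before = GC.stats().usedSize;

    long sum = 0;
    foreach (i; 0 .. 1_000_000) {
        auto item = new AllocationItem(i);   // plain 'new' in a loop
        sum += item.value;
    }

    const after = GC.stats().usedSize;
    // If the compiler had elided the allocations (as a JVM with escape analysis
    // can), the used size would barely move; with real heap allocations it does.
    writeln("GC used bytes before: ", before, "  after: ", after);
    assert(sum > 0);                         // keep the loop result live
}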
Jun 02 2009
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Jarrett Billingsley:
 (On a side note, I have a feeling you and bearophile will get on famously.)<
I have found a new friend ;-)

Some timings, usual settings, Core2 2 GHz:

Timings, N=100_000_000, Windows, seconds:
  D 1:  40.20 DMD
  D 2:  21.83 DMD
  D 2:  18.80 DMD, struct + scope
  C++:  18.06
  D 1:   8.47 DMD
  D 2:   7.41 DMD + scope
  Java   1.78 -server
  Java:  1.44

Timings, N=100_000_000, Pubuntu, seconds:
  D 1:  25.7  LDC
  C++:   6.87
  D 1:   2.67 LDC + scope
  Java:  1.49

Poor LDC :-)

Bye,
bearophile
Jun 03 2009
parent reply bearophile <bearophileHUGS lycos.com> writes:
I have tried the new JavaVM on Win, that optionally performs escape analysis,
and the results are nice:

Timings, N=100_000_000, Windows, seconds:
  D 1:  40.20 DMD
  D 2:  21.83 DMD
  D 2:  18.80 DMD, struct + scope
  C++:  18.06
  D 1:   8.47 DMD
  D 2:   7.41 DMD + scope
  Java:  1.84 V.1.6.0_14, -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC
  Java   1.78 -server
  Java:  1.44
  Java:  1.38 V.1.6.0_14
  Java:  0.28 V.1.6.0_14, -server -XX:+DoEscapeAnalysis
  
Timings, N=100_000_000, Pubuntu, seconds:
  D 1:  25.7  LDC
  C++:   6.87
  D 1:   2.67 LDC + scope
  Java:  1.49

Bye,
bearophile
Jun 03 2009
next sibling parent reply Robert Fraser <fraserofthenight gmail.com> writes:
What's the difference between:
   D 1:  40.20 DMD
   D 2:  21.83 DMD
   D 2:  18.80 DMD, struct + scope
and:
   D 1:   8.47 DMD
   D 2:   7.41 DMD + scope
...?
Jun 03 2009
parent bearophile <bearophileHUGS lycos.com> writes:
Robert Fraser:

 What's the difference between:
   D 1:  40.20 DMD
   D 2:  21.83 DMD
That's the standard code.
 and:
   D 1:   8.47 DMD
   D 2:   7.41 DMD + scope
They are both with scope, on D1 and D2. Sorry for my small omission.

Bye,
bearophile
Jun 03 2009
prev sibling parent reply Sam Hu <samhudotsamhu gmail.com> writes:
bearophile Wrote:

 I have tried the new JavaVM on Win, that optionally performs escape analysis,
and the results are nice:
 
 Timings, N=100_000_000, Windows, seconds:
   D 1:  40.20 DMD
   D 2:  21.83 DMD
   D 2:  18.80 DMD, struct + scope
   C++:  18.06
   D 1:   8.47 DMD
   D 2:   7.41 DMD + scope
   Java:  1.84 V.1.6.0_14, -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC
   Java   1.78 -server
   Java:  1.44
   Java:  1.38 V.1.6.0_14
   Java:  0.28 V.1.6.0_14, -server -XX:+DoEscapeAnalysis
   
 Timings, N=100_000_000, Pubuntu, seconds:
   D 1:  25.7  LDC
   C++:   6.87
   D 1:   2.67 LDC + scope
   Java:  1.49
 
 Bye,
 bearophile
Sorry for stepping in... What does this result mean? Does it mean D is slower than Java, and C++ is also slower than Java? Or is that true only under certain circumstances? I am really confused and would really appreciate any further explanation.

Regards,
Sam
Jun 03 2009
next sibling parent reply Robert Fraser <fraserofthenight gmail.com> writes:
Sam Hu wrote:
 bearophile Wrote:
 
 I have tried the new JavaVM on Win, that optionally performs escape analysis,
and the results are nice:

 Timings, N=100_000_000, Windows, seconds:
   D 1:  40.20 DMD
   D 2:  21.83 DMD
   D 2:  18.80 DMD, struct + scope
   C++:  18.06
   D 1:   8.47 DMD
   D 2:   7.41 DMD + scope
   Java:  1.84 V.1.6.0_14, -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC
   Java   1.78 -server
   Java:  1.44
   Java:  1.38 V.1.6.0_14
   Java:  0.28 V.1.6.0_14, -server -XX:+DoEscapeAnalysis
   
 Timings, N=100_000_000, Pubuntu, seconds:
   D 1:  25.7  LDC
   C++:   6.87
   D 1:   2.67 LDC + scope
   Java:  1.49

 Bye,
 bearophile
 Sorry for stepping in... What does this result mean? Does it mean D is slower than Java, and C++ is also slower than Java? Or is that true only under certain circumstances? I am really confused and would really appreciate any further explanation.

 Regards,
 Sam
It suggests that for dynamic allocation of many small objects via "new", Java is an order of magnitude faster than C++, which in turn is slightly faster than D.
Jun 03 2009
parent reply bearophile <bearophileHUGS lycos.com> writes:
Rainer Deyke:

The slow aspects of this garbage collector (detection and preservation) aren't
really tested by this benchmark.<
In practice, real-world Java programs usually show good enough performance even when detection and preservation are taken into account.
These are the timings without dynamic memory allocation:
   D 1:   8.47 DMD [+ scope]
   D 2:   7.41 DMD + scope
   Java:  0.28 V.1.6.0_14, -server -XX:+DoEscapeAnalysis
It's not exactly the same, because in that Java code I have used a program-wide optimization flag (which I guess will become the default), while in D I have had to add "scope" everywhere, and I think adding "scope" is less safe than letting the compiler perform escape analysis. So I am tempted to put the 0.28 seconds result among the dynamic allocation timings, even if technically it is not, because to the programmer the program "feels", looks and acts like dynamic allocation; it's just faster :-) In the end, what counts is how well the program runs after the compiler has done its work.
D's performance is unexpectedly bad, so much that I expect that it might be
using dynamic memory allocation anyway despite the 'scope' keyword. Java is
clever in that it eliminates unnecessary dynamic memory allocations
automatically.<
I think Java here is doing a bit more than just removing the dynamic allocation. I don't think D (compiled with LDC) is doing any allocation here. I'll ask on the LDC IRC channel. I'll also take a look at the asm generated by the JavaVM (it's not handy to get at the asm generated by the JVM; you need to install a debug version of it... how stupid).

-------------------------

Sam Hu:
I am sorry to hear that,really,really sorry.<
Wait, things may not be that bad. And even if they are bad, the developers of the LDC compiler may find ways to improve the situation.

-------------------------

Robert Fraser:
It suggests that for dynamic allocation of many small objects via "new", Java
is an order of magnitude faster than C++, which in turn is slightly faster than
D.<
Yes, for such tiny benchmarks I have several times seen 10-12 times higher allocation performance in Java compared to D1-DMD. But real programs don't spend all their time allocating and freeing memory...

Bye,
bearophile
Jun 04 2009
next sibling parent reply Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
bearophile wrote:
 Rainer Deyke:
 D's performance is unexpectedly bad, so much that I expect that it might be
using dynamic memory allocation anyway despite the 'scope' keyword. Java is
clever in that it eliminates unnecessary dynamic memory allocations
automatically.<
I think Java here is doing a bit more than just removing the dynamic allocation. I don't think D (compiled with LDC) is doing doing any allocation here. I'll ask to the LDC IRC channel.
LDC actually still does a dynamic allocation there because it doesn't eliminate dynamic allocations in loops. This is unfortunate, but I haven't yet had the time to figure out how to get the optimization passes to prove the allocation can't be live when reached again. (If multiple instances of memory allocated at the same allocation site may be reachable at the same time, it's not safe to use a stack allocation instead of a heap allocation) It's on my to-do list, though.
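To illustrate the distinction with a hypothetical example (not code from the benchmark): in sumValues below every Node dies before the next iteration allocates, so the allocation site could in principle become a stack slot; in buildList, objects from the same site stay reachable through the list head, so demoting them to the stack would be wrong.

class Node {
    int value;
    Node next;
    this(int v) { value = v; }
}

int sumValues(int iters) {
    int sum = 0;
    for (int i = 0; i < iters; ++i) {
        auto n = new Node(i);   // never stored outside the loop body
        sum += n.value;
    }
    return sum;
}

Node buildList(int iters) {
    Node head = null;
    for (int i = 0; i < iters; ++i) {
        auto n = new Node(i);
        n.next = head;          // every instance stays reachable via 'head'
        head = n;
    }
    return head;
}

unittest {
    assert(sumValues(10) == 45);
    assert(buildList(3).value == 2);
}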
Jun 04 2009
parent reply bearophile <bearophileHUGS lycos.com> writes:
Frits van Bommel:
 LDC actually still does a dynamic allocation there because it doesn't
eliminate 
 dynamic allocations in loops.
I have compiled the loop in foo() with LDC:

class AllocationItem {
    int value;
    this(int v) { this.value = v; }
}

int foo(int iters) {
    int sum = 0;
    for (int i = 0; i < iters; ++i) {
        scope auto item = new AllocationItem(i);
        sum += item.value;
    }
    return sum;
}

The asm of the core of the loop:

.LBB2_2:
    movl    $_D11gc_test2b_d14AllocationItem6__vtblZ, 8(%esp)
    movl    $0, 12(%esp)
    movl    %edi, 16(%esp)
    movl    %ebx, (%esp)
    call    _d_callfinalizer
    incl    %edi
    cmpl    %esi, %edi
    jne     .LBB2_2

I can see a call to finalizer, but not the allocation?
 This is unfortunate, but I haven't yet had the time to figure out how to get
the 
 optimization passes to prove the allocation can't be live when reached again. 
 (If multiple instances of memory allocated at the same allocation site may be 
 reachable at the same time, it's not safe to use a stack allocation instead of
a 
 heap allocation)
The new JavaVM with the option I have shown is clearly able to do such things. Can't you take a look at the source code of the JavaVM? :-)
There's a huge amount of NIH in the open source :-)

Bye,
bearophile
Jun 04 2009
next sibling parent reply Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
bearophile wrote:
 Frits van Bommel:
 LDC actually still does a dynamic allocation there because it doesn't
eliminate 
 dynamic allocations in loops.
I have compiled the loop in foo() with LDC:
[snip]
         scope auto item = new AllocationItem(i);
[snip]
 
 The asm of the core of the loop:
 
 .LBB2_2:
     movl    $_D11gc_test2b_d14AllocationItem6__vtblZ, 8(%esp)
     movl    $0, 12(%esp)
     movl    %edi, 16(%esp)
     movl    %ebx, (%esp)
     call    _d_callfinalizer
     incl    %edi
     cmpl    %esi, %edi
     jne .LBB2_2
 
 I can see a call to finalizer, but not the allocation?
Sorry, I thought we were talking about the code without 'scope'. Of course the class is indeed stack-allocated if you use scope. (The following:
 This is unfortunate, but I haven't yet had the time to figure out how to get
the 
 optimization passes to prove the allocation can't be live when reached again. 
 (If multiple instances of memory allocated at the same allocation site may be 
 reachable at the same time, it's not safe to use a stack allocation instead of
a 
 heap allocation)
only applies when 'scope' was not used, and the compiler therefore initially heap-allocated it)
 The new JavaVM with the option I have shown is clearly able to do such things.
 Can't you take a look at the source code of the JavaVM? :-)
 There's a huge amount of NIH in the open source :-)
I suspect the Java VM uses a different internal representation of the code than LLVM does...
Jun 04 2009
parent Robert Fraser <fraserofthenight gmail.com> writes:
Frits van Bommel wrote:
 The new JavaVM with the option I have shown is clearly able to do such 
 things.
 Can't you take a look at the source code of the JavaVM? :-)
 There's a huge amount of NIH in the open source :-)
I suspect the Java VM uses a different internal representation of the code than LLVM does...
HotSpot uses 3-argument SSA for IR, AFAIK... I think LLVM is also SSA-based, right? But the Java source is _quite_ complex.
Jun 04 2009
prev sibling parent bearophile <bearophileHUGS lycos.com> writes:
LDC is a moving target because it's actively developed, and generally things
improve with time.
This is a recent change by the quite active Frits van Bommel:
http://www.dsource.org/projects/ldc/changeset/1486%3A9ed0695cb93c

This is a cleaned up version discussed in this thread:

import tango.stdc.stdio: printf;
import Integer = tango.text.convert.Integer;

class AllocationItem {
    int value;
    this(int v) { this.value = v; }
}

int foo(int iters) {
    int sum = 0;
    for (int i = 0; i < iters; ++i) {
        auto item = new AllocationItem(i);
        sum += item.value;
    }
    return sum;
}

void main(char[][] args) {
    int iters = Integer.parse(args[1]);
    printf("%d\n", foo(iters));        
}


The asm generated by the last LDC (based on DMD v1.045 and llvm 2.6svn (Tue Jun 9 22:34:25 2009)) (this is just the important part of the asm):

/*
foo:
	testl	%eax, %eax
	jle	.LBB2_4
	movl	%eax, %ecx
	xorl	%eax, %eax
	.align	16
.LBB2_2:
	incl	%eax
	cmpl	%ecx, %eax
	jne	.LBB2_2
	leal	-2(%ecx), %eax
	leal	-1(%ecx), %edx
	mull	%edx
	shldl	$31, %eax, %edx
	leal	-1(%edx,%ecx), %eax
	ret
.LBB2_4:
	xorl	%eax, %eax
	ret
*/


This is the same code with "scope" added:

import tango.stdc.stdio: printf;
import Integer = tango.text.convert.Integer;

class AllocationItem {
    int value;
    this(int v) { this.value = v; }
}

int foo(int iters) {
    int sum = 0;
    for (int i = 0; i < iters; ++i) {
        scope auto item = new AllocationItem(i);
        sum += item.value;
    }
    return sum;
}

void main(char[][] args) {
    int iters = Integer.parse(args[1]);
    printf("%d\n", foo(iters));        
}

Its asm:

/*
foo:
	pushl	%ebx
	pushl	%edi
	pushl	%esi
	subl	$24, %esp
	testl	%eax, %eax
	jle	.LBB2_4
	movl	%eax, %esi
	xorl	%edi, %edi
	leal	8(%esp), %ebx
	.align	16
.LBB2_2:
	movl	$_D11gc_test2b_d14AllocationItem6__vtblZ, 8(%esp)
	movl	$0, 12(%esp)
	movl	%edi, 16(%esp)
	movl	%ebx, (%esp)
	call	_d_callfinalizer
	incl	%edi
	cmpl	%esi, %edi
	jne	.LBB2_2
	leal	-2(%esi), %eax
	leal	-1(%esi), %ecx
	mull	%ecx
	shldl	$31, %eax, %edx
	leal	-1(%edx,%esi), %eax
	jmp	.LBB2_5
.LBB2_4:
	xorl	%eax, %eax
.LBB2_5:
	addl	$24, %esp
	popl	%esi
	popl	%edi
	popl	%ebx
	ret
*/


The running time:
...$ elaps ./gc_test1 250000000
-1782069568
real	0m0.170s
user	0m0.160s
sys	0m0.010s

The version with "scope":
...$ elaps ./gc_test2 250000000
-1782069568
real	0m6.430s
user	0m6.430s
sys	0m0.000s

(Later I may try again with a less simple and more realistic benchmark, because this one is too much of a toy to be interesting.)

Bye,
bearophile
Jun 10 2009
prev sibling parent reply Robert Fraser <fraserofthenight gmail.com> writes:
bearophile wrote:
 Yes, for such tiny benchmarks I have seen several times 10-12 higher
allocation performance in Java compared to D1-DMD. But real programs don't use
all their time allocating and freeing memory...
 
 Bye,
 bearophile
For the compiler I'm working on now (in D), I wanted to check the effects of allocation on performance. Using a placement new, the time for lex/parse/semantic/type-infer/codegen (on a really huge in-memory file) went from ~6 seconds to ~4 seconds (I don't have the exact timings, and can't repro right now since I'm redoing inference). So I'd say that even in real-world applications, these things have an effect.

Of course, this only applies to programs which allocate and throw away a lot of objects, which is encouraged by Java's programming models, much less so by, say, C++'s.
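A rough sketch of the kind of thing Robert describes, using std.conv.emplace as a stand-in for placement new (the Token class and the Region layout are invented for illustration, not taken from his compiler):

import std.conv : emplace;

class Token {
    int kind;
    this(int k) { kind = k; }
}

struct Region {
    ubyte[] storage;
    size_t used;

    // Construct a class instance into the preallocated buffer instead of
    // asking the GC for a fresh block every time.
    T make(T, Args...)(Args args) if (is(T == class)) {
        enum size = __traits(classInstanceSize, T);
        assert(used + size <= storage.length);       // sketch: no growth logic
        auto mem = storage[used .. used + size];
        used += (size + 15) & ~cast(size_t) 15;      // keep the next object aligned
        return emplace!T(mem, args);
    }
}

unittest {
    auto region = Region(new ubyte[64 * 1024]);
    auto t = region.make!Token(42);
    assert(t.kind == 42);
}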
Jun 04 2009
parent bearophile <bearophileHUGS lycos.com> writes:
Robert Fraser:
 Of course, this only applies to programs which allocate and throw away a lot of objects, which is encouraged by Java's programming models, much less so by, say, C++'s.
Right, there are many potential new D programmers coming from Java who may want to use that style, which relies a lot on an efficient GC. But you are missing another important style of programming that allocates & frees tons of objects: functional-style programming, where immutable data is the norm. If the D2 language wants to appeal to functional programmers, it will have to manage such immutables more efficiently.

Bye,
bearophile
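A tiny illustration of that style (made-up code, not a benchmark): a persistent list where every "update" allocates a fresh cell that is never mutated afterwards, so the GC sees a steady stream of small objects.

struct Cell {
    int head;
    immutable(Cell)* tail;
}

// Pure "factory": its parameters carry no mutable indirections, so callers may
// convert its freshly allocated result to immutable implicitly.
Cell* cons(int x, immutable(Cell)* rest) pure {
    return new Cell(x, rest);      // one GC allocation per list element
}

long total(immutable(Cell)* list) pure {
    long s = 0;
    for (auto p = list; p !is null; p = p.tail)
        s += p.head;
    return s;
}

void main() {
    immutable(Cell)* list = null;
    foreach (i; 1 .. 100_001)
        list = cons(i, list);      // every "update" allocates a new immutable cell
    assert(total(list) == 100_000L * 100_001 / 2);
}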
Jun 04 2009
prev sibling parent reply Rainer Deyke <rainerd eldwood.com> writes:
Sam Hu wrote:
 What does this result mean?Does it mean D is slower than Java and C++
 is also slower than Java?Or that's true just under  certain
 circumstance? I am really confused and really appreicate if any
 further explanation.
These are the timings for using dynamic memory allocation:
   D 1:  40.20 DMD
   D 2:  21.83 DMD
   C++:  18.06
   Java:  1.38 V.1.6.0_14
Java is the fastest by a large margin because it has the benefit of a moving garbage collector. This means allocation is a simple pointer bump and deallocation is completely free. The slow aspects of this garbage collector (detection and preservation) aren't really tested by this benchmark. These are the timings without dynamic memory allocation:
   D 1:   8.47 DMD [+ scope]
   D 2:   7.41 DMD + scope
   Java:  0.28 V.1.6.0_14, -server -XX:+DoEscapeAnalysis
D's performance is unexpectedly bad, so much that I expect that it might be using dynamic memory allocation anyway despite the 'scope' keyword. Java is clever in that it eliminates unnecessary dynamic memory allocations automatically.

C++ is notably absent, but I fully expect it to outperform Java by a significant margin.

-- 
Rainer Deyke - rainerd eldwood.com
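To make the "pointer bump" point concrete, here is a toy sketch (purely illustrative; neither HotSpot's allocator nor D's GC looks like this internally): allocation just advances an offset, and reclaiming a whole batch of dead objects is a single reset.

struct BumpArena {
    ubyte[] buffer;
    size_t offset;

    // Allocation is an offset increment; there is no per-object free.
    void* allocate(size_t size) {
        size = (size + 15) & ~cast(size_t) 15;    // keep allocations 16-byte aligned
        if (offset + size > buffer.length)
            return null;                          // a real moving GC would collect/evacuate here
        void* p = buffer.ptr + offset;
        offset += size;
        return p;
    }

    void reset() { offset = 0; }                  // "frees" every object at once
}

unittest {
    auto arena = BumpArena(new ubyte[1024]);
    auto a = arena.allocate(40);
    auto b = arena.allocate(24);
    assert(a !is null && b !is null && a !is b);
    arena.reset();                                // all objects gone, no per-object work
}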
Jun 03 2009
parent Sam Hu <samhudotsamhu gmail.com> writes:
 D's performance is unexpectedly bad, so much that I expect that it might
 be using dynamic memory allocation anyway despite the 'scope' keyword.
I am sorry to hear that,really,really sorry.
Jun 03 2009
prev sibling next sibling parent reply Robert Clipsham <robert octarineparrot.com> writes:
Diwaker Gupta wrote:
 I've just started to play around with D, and I'm hoping someone can clarify
this. I wrote a very simple program that just allocates lots of objects, in
order to benchmark the garbage collector in D. For comparison, I wrote the
programs in C++, Java and D:
 C++: http://gist.github.com/122708
 Java: http://gist.github.com/122709
 D: http://gist.github.com/121790
 
 With an iteration count of 99999999, I get the following numbers:
 JAVA:
 0:01.60 elapsed, 1.25 user, 0.28 system
 C++:
 0:04.99 elapsed, 4.97 user, 0.00 system
 D:
 0:25.28 elapsed, 25.22 user, 0.00 system
 
 As you can see, D is abysmally slow compared to C++ and Java. This is using
the GNU gdc compiler. I'm hoping the community can give me some insight on what
is going on.
 
 Thanks,
 Diwaker
After porting the D version to tango:

D: 6.282s (ldmd -O5 -inline -release -L-s -singleobj gctest.d)
C++: 4.435s (g++ -O5 gctest.d)

This is on a C2D 2.2Ghz, 2GB RAM, Linux x86-64. I don't have java installed, so can't test that. Maybe if you're planning to use the GC a lot you should consider using tango?
Jun 03 2009
parent reply Robert Clipsham <robert octarineparrot.com> writes:
Robert Clipsham wrote:
 After porting the D version to tango:
 
 D: 6.282s (ldmd -O5 -inline -release -L-s -singleobj gctest.d)
 C++: 4.435s (g++ -O5 gctest.d)
 
 This is on a C2D 2.2Ghz, 2GB RAM, Linux x86-64. I don't have java 
 installed, so can't test that. Maybe if you're planning to use the GC a 
 lot you should consider using tango?
After reading TSalm's post, I reran the D version with the scope keyword at line 16:

D (with scope): 1.098s
D: 6.282s
C++: 4.435s

It seems by using scope and tango you can easily compete with C++.
Jun 03 2009
parent Rainer Deyke <rainerd eldwood.com> writes:
Robert Clipsham wrote:
 After reading TSalm's post, I reran the D version with the scope keyword
 at line 16:
 
 D (with scope): 1.098s
 D: 6.282s
 C++: 4.435s
 
 It seems by using scope and tango you can easily compete with C++.
'scope' eliminates dynamic memory allocation. At this point you're not measuring the speed of the garbage collector at all. For a fair comparison, you should also eliminate the useless dynamic memory allocation from the C++ version.

-- 
Rainer Deyke - rainerd eldwood.com
Jun 03 2009
prev sibling next sibling parent TSalm <TSalm free.fr> writes:
On Wed, 03 Jun 2009 02:40:11 +0200, Diwaker Gupta <diwaker floatingsun.net> wrote:

 I've just started to play around with D, and I'm hoping someone can  
 clarify this. I wrote a very simple program that just allocates lots of  
 objects, in order to benchmark the garbage collector in D. For  
 comparison, I wrote the programs in C++, Java and D:
 C++: http://gist.github.com/122708
 Java: http://gist.github.com/122709
 D: http://gist.github.com/121790

 With an iteration count of 99999999, I get the following numbers:
 JAVA:
 0:01.60 elapsed, 1.25 user, 0.28 system
 C++:
 0:04.99 elapsed, 4.97 user, 0.00 system
 D:
 0:25.28 elapsed, 25.22 user, 0.00 system
I think line 14 in the D source is useless. On my Linux system:

D with line 14 removed:
----------------------
tsalm fgabriel:~/dev/DBenchmark$ time ./Benchmark allocations 99999999
787459713

real	0m28.779s
user	0m28.778s
sys	0m0.004s

C++:
---
tsalm fgabriel:~/dev/DBenchmark$ time ./a.out allocations 99999999
Ran 99999999 allocations of RunAllocations.  Final value: 787459713

real	0m16.406s
user	0m16.405s
sys	0m0.004s

Java:
-----
tsalm fgabriel:~/dev/DBenchmark$ time java Benchmark allocations 99999999
Ran 99999999 allocations of RunAllocations.  Final value: 787459713

real	0m6.679s
user	0m6.408s
sys	0m0.248s

But with the use of the "scope" keyword at line 13 of the D source:

tsalm fgabriel:~/dev/DBenchmark$ time ./Benchmark allocations 99999999
787459713

real	0m10.752s
user	0m10.753s
sys	0m0.000s
Jun 03 2009
prev sibling next sibling parent Aelx <aelxx yandex.ru> writes:
Diwaker Gupta Wrote:

 I've just started to play around with D, and I'm hoping someone can clarify
this. I wrote a very simple program that just allocates lots of objects, in
order to benchmark the garbage collector in D. For comparison, I wrote the
programs in C++, Java and D:
 C++: http://gist.github.com/122708
 Java: http://gist.github.com/122709
 D: http://gist.github.com/121790
 
 With an iteration count of 99999999, I get the following numbers:
 JAVA:
 0:01.60 elapsed, 1.25 user, 0.28 system
 C++:
 0:04.99 elapsed, 4.97 user, 0.00 system
 D:
 0:25.28 elapsed, 25.22 user, 0.00 system
 
 As you can see, D is abysmally slow compared to C++ and Java. This is using
the GNU gdc compiler. I'm hoping the community can give me some insight on what
is going on.
 
 Thanks,
 Diwaker
Hi. Inspired by this idea, I changed the rules somewhat to make the task more complicated. They are:

1. Every "AllocationItem" has references to three other items.
2. Generate "n_items" items in a static array "items" of type "AllocationItem", with the "value" field set to 0.
3. Make random connections between all these items via their reference fields.
4. Iterate "n_iters" times with the following algorithm:
   a) create a new "AllocationItem" with "value" set to 1;
   b) replace a random item in the array "items" with this new item;
   c) add connections to this item;
   d) remove (variant 1) or change (variant 2) three random connections;
   e) now, if some object isn't referenced by others, it should be removed (GC collected).
5. Count the old items (with "value" set to 0) and the new ones.

(A rough D sketch of these rules follows after this post.)

Here are my programs in D and Java. There is no C++ variant, sorry. In D I used Bill Baxter's weak reference module, modified for D2:
http://www.dsource.org/projects/scrapple/browser/trunk/weakref

Now the results:
1) It works strangely: from time to time it gives different results in D2 (approx. 1 in 10), and it's not due to the RNG.
2) The Java version is awfully slow (maybe because it's my second Java app; the first was 8 years ago). Now I hate Java even more.
3) Java and D give different results.

It's all strange. Maybe I did something wrong.
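The following is a hypothetical condensed sketch of the rules above (variant 2, "change" connections), with plain GC references instead of the weak-reference module and made-up sizes; it is not Aelx's program, just an illustration of the allocation pattern being benchmarked.

import std.random : Random, uniform;
import std.stdio : writeln;

class Item {
    int value;
    Item[3] refs;
    this(int v) { value = v; }
}

void main() {
    enum nItems = 10_000;
    enum nIters = 100_000;
    auto rng = Random(42);
    auto items = new Item[nItems];

    // Rules 1-3: build the initial items and wire up random connections.
    foreach (ref it; items)
        it = new Item(0);
    foreach (it; items)
        foreach (ref r; it.refs)
            r = items[uniform(0, nItems, rng)];

    // Rule 4: repeatedly splice a fresh item into the graph. Items that become
    // unreachable are reclaimed by the GC (rule 4e) without explicit code.
    foreach (i; 0 .. nIters) {
        auto fresh = new Item(1);
        foreach (ref r; fresh.refs)
            r = items[uniform(0, nItems, rng)];
        items[uniform(0, nItems, rng)] = fresh;            // 4b: may orphan an old item
        foreach (k; 0 .. 3)                                // 4d, variant 2: rewire edges
            items[uniform(0, nItems, rng)].refs[uniform(0, 3, rng)] = fresh;
    }

    // Rule 5: count surviving old items vs. new ones in the array.
    int oldCount, newCount;
    foreach (it; items)
        if (it.value == 0) ++oldCount; else ++newCount;
    writeln(oldCount, " old, ", newCount, " new");
}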
Jul 25 2009
prev sibling parent Aelx <aelxx yandex.ru> writes:
Oh my. I forgot programs, here they are:
http://gist.github.com/154958
Jul 25 2009