
digitalmars.D - DMD compilation speed

reply "Martin Krejcirik" <mk-junk i-line.cz> writes:
It seems like every DMD release makes compilation slower. This 
time I see 10.8s vs 7.8s on my little project. I know this is 
generally the least of concerns, and D1's lightning-fast times 
are long gone, but since Walter often claims D's superior 
compilation speeds, maybe some profiling is in order?
Mar 29 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/29/2015 4:14 PM, Martin Krejcirik wrote:
 It seems like every DMD release makes compilation slower. This time I see 10.8s
 vs 7.8s on my little project. I know this is generally least of concern, and
 D1's lighting-fast times are long gone, but since Walter often claims D's
 superior compilation speeds, maybe some profiling is in order ?
Sigh. Two things happen constantly:

1. object file sizes creep up
2. compilation speed slows down

It's like rust on your car. Fixing it requires constant vigilance.
Mar 29 2015
next sibling parent reply "weaselcat" <weaselcat gmail.com> writes:
On Monday, 30 March 2015 at 00:12:09 UTC, Walter Bright wrote:
 On 3/29/2015 4:14 PM, Martin Krejcirik wrote:
 It seems like every DMD release makes compilation slower. This 
 time I see 10.8s
 vs 7.8s on my little project. I know this is generally least 
 of concern, and
 D1's lighting-fast times are long gone, but since Walter often 
 claims D's
 superior compilation speeds, maybe some profiling is in order ?
Sigh. Two things happen constantly:

1. object file sizes creep up
2. compilation speed slows down

It's like rust on your car. Fixing it requires constant vigilance.
would having benchmarks help keep this under control/make regressions easier to find?
Mar 29 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/29/2015 5:14 PM, weaselcat wrote:
 would having benchmarks help keep this under control/make regressions easier to
 find?
benchmarks would help.
Mar 29 2015
prev sibling next sibling parent reply "Gary Willoughby" <dev nomad.so> writes:
On Monday, 30 March 2015 at 00:12:09 UTC, Walter Bright wrote:
 On 3/29/2015 4:14 PM, Martin Krejcirik wrote:
 It seems like every DMD release makes compilation slower. This 
 time I see 10.8s
 vs 7.8s on my little project. I know this is generally least 
 of concern, and
 D1's lighting-fast times are long gone, but since Walter often 
 claims D's
 superior compilation speeds, maybe some profiling is in order ?
Sigh. Two things happen constantly:

1. object file sizes creep up
2. compilation speed slows down

It's like rust on your car. Fixing it requires constant vigilance.
Are there any plans to fix this up in a point release? The compile times have really taken a nose dive in v2.067. It's really taken the fun out of the language.
Apr 09 2015
parent reply Martin Nowak <code+news.digitalmars dawg.eu> writes:
On 04/09/2015 03:41 PM, Gary Willoughby wrote:
 Are there any plans to fix this up in a point release? The compile times
 have really taken a nose dive in v2.067. It's really taken the fun out
 of the language.
Filed a bug report, we'll figure something out. https://issues.dlang.org/show_bug.cgi?id=14431
Apr 09 2015
parent "Gary Willoughby" <dev nomad.so> writes:
On Friday, 10 April 2015 at 02:02:17 UTC, Martin Nowak wrote:
 On 04/09/2015 03:41 PM, Gary Willoughby wrote:
 Are there any plans to fix this up in a point release? The 
 compile times
 have really taken a nose dive in v2.067. It's really taken the 
 fun out
 of the language.
Filed a bug report, we'll figure something out. https://issues.dlang.org/show_bug.cgi?id=14431
Cheers.
Apr 10 2015
prev sibling parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Monday, 30 March 2015 at 00:12:09 UTC, Walter Bright wrote:
 On 3/29/2015 4:14 PM, Martin Krejcirik wrote:
 It seems like every DMD release makes compilation slower. This 
 time I see 10.8s
 vs 7.8s on my little project. I know this is generally least 
 of concern, and
 D1's lighting-fast times are long gone, but since Walter often 
 claims D's
 superior compilation speeds, maybe some profiling is in order ?
Sigh. Two things happen constantly:

1. object file sizes creep up
2. compilation speed slows down

It's like rust on your car. Fixing it requires constant vigilance.
I just did some profiling of building phobos. I noticed ~20% of the runtime and ~40% of the L2 cache misses were in slist_reset. Is this expected?
Apr 09 2015
prev sibling next sibling parent reply Martin Nowak <code+news.digitalmars dawg.eu> writes:
On 03/30/2015 01:14 AM, Martin Krejcirik wrote:
 It seems like every DMD release makes compilation slower. This time I
 see 10.8s vs 7.8s on my little project. I know this is generally least
 of concern, and D1's lighting-fast times are long gone, but since Walter
 often claims D's superior compilation speeds, maybe some profiling is in
 order ?
A 25% slowdown is severe; can you share the project and perhaps file a bug report?
Mar 30 2015
parent reply "Martin Krejcirik" <mk-junk i-line.cz> writes:
Here is one example:

Orange d5b2e0127c67f50bd885ee43a7dd61dd418b1661
https://github.com/jacob-carlborg/orange.git
make

2.065.0
real    0m9.028s
user    0m7.972s
sys     0m0.940s

2.066.1
real    0m10.796s
user    0m9.629s
sys     0m1.056s

2.067.0
real    0m13.543s
user    0m12.097s
sys     0m1.348s
Mar 30 2015
parent Jacob Carlborg <doob me.com> writes:
On 2015-03-30 18:09, Martin Krejcirik wrote:
 Here is one example:

 Orange d5b2e0127c67f50bd885ee43a7dd61dd418b1661
 https://github.com/jacob-carlborg/orange.git
 make

 2.065.0
 real    0m9.028s
 user    0m7.972s
 sys     0m0.940s

 2.066.1
 real    0m10.796s
 user    0m9.629s
 sys     0m1.056s

 2.067.0
 real    0m13.543s
 user    0m12.097s
 sys     0m1.348s
These are the timings for compiling the unit tests without linking. It passes all the files to DMD in one command. The make file invokes DMD once per file.

1.076
real    0m0.212s
user    0m0.187s
sys     0m0.022s

2.065.0
real    0m0.426s
user    0m0.357s
sys     0m0.065s

2.066.1
real    0m0.470s
user    0m0.397s
sys     0m0.064s

2.067.0
real    0m0.510s
user    0m0.435s
sys     0m0.074s

It might not be fair to compare with D1 since it's not exactly the same code.

--
/Jacob Carlborg
Mar 30 2015
prev sibling next sibling parent Mathias Lang via Digitalmars-d <digitalmars-d puremagic.com> writes:
Is it only DMD compile time or DMD + ld? ld can be very slow sometimes.

2015-03-30 1:14 GMT+02:00 Martin Krejcirik via Digitalmars-d <
digitalmars-d puremagic.com>:

 It seems like every DMD release makes compilation slower. This time I see
 10.8s vs 7.8s on my little project. I know this is generally least of
 concern, and D1's lighting-fast times are long gone, but since Walter often
 claims D's superior compilation speeds, maybe some profiling is in order ?
Mar 30 2015
prev sibling parent reply "lobo" <swamplobo gmail.com> writes:
On Sunday, 29 March 2015 at 23:14:31 UTC, Martin Krejcirik wrote:
 It seems like every DMD release makes compilation slower. This 
 time I see 10.8s vs 7.8s on my little project. I know this is 
 generally least of concern, and D1's lighting-fast times are 
 long gone, but since Walter often claims D's superior 
 compilation speeds, maybe some profiling is in order ?
I'm finding memory usage the biggest problem for me. A 3s increase in compile time is not nice, but an increase of 500MB in RAM usage with DMD 2.067 over 2.066 means I can no longer build one of my projects.

bye, lobo
Mar 30 2015
next sibling parent reply "lobo" <swamplobo gmail.com> writes:
On Monday, 30 March 2015 at 22:39:51 UTC, lobo wrote:
 On Sunday, 29 March 2015 at 23:14:31 UTC, Martin Krejcirik 
 wrote:
 It seems like every DMD release makes compilation slower. This 
 time I see 10.8s vs 7.8s on my little project. I know this is 
 generally least of concern, and D1's lighting-fast times are 
 long gone, but since Walter often claims D's superior 
 compilation speeds, maybe some profiling is in order ?
I'm finding memory usage the biggest problem for me. 3s speed increase is not nice but an increase of 500MB RAM usage with DMD 2.067 over 2.066 means I can no longer build one of my projects. bye, lobo
I should add that I am on a 32-bit machine with 4GB RAM. I just ran some tests measuring RAM usage:

DMD 2.067 ~4.2GB (fails here so not sure of the full amount required)
DMD 2.066 ~3.7GB (maximum)
DMD 2.065 ~3.1GB (maximum)

It was right on the edge with 2.066 anyway, but this trend of more RAM usage seems to also be occurring with each DMD release.

bye, lobo
Mar 30 2015
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/30/15 3:47 PM, lobo wrote:
 On Monday, 30 March 2015 at 22:39:51 UTC, lobo wrote:
 On Sunday, 29 March 2015 at 23:14:31 UTC, Martin Krejcirik wrote:
 It seems like every DMD release makes compilation slower. This time I
 see 10.8s vs 7.8s on my little project. I know this is generally
 least of concern, and D1's lighting-fast times are long gone, but
 since Walter often claims D's superior compilation speeds, maybe some
 profiling is in order ?
I'm finding memory usage the biggest problem for me. 3s speed increase is not nice but an increase of 500MB RAM usage with DMD 2.067 over 2.066 means I can no longer build one of my projects. bye, lobo
I should add that I am on a 32-bit machine with 4GB RAM. I just ran some tests measuring RAM usage:

DMD 2.067 ~4.2GB (fails here so not sure of the full amount required)
DMD 2.066 ~3.7GB (maximum)
DMD 2.065 ~3.1GB (maximum)

It was right on the edge with 2.066 anyway, but this trend of more RAM usage seems to also be occurring with each DMD release.

bye, lobo
Part of our acceptance tests should be peak memory, object file size, executable file size, and run time for building a few test programs (starting with "hello, world"). Any change in these must be investigated, justified, and documented. -- Andrei
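A rough sketch of what one such check could look like, assuming a Posix box, a hello.d in the working directory, and dmd on the PATH (the file names are made up for illustration; peak memory would still need an external tool such as /usr/bin/time -v):

import std.datetime : Clock;
import std.file : getSize;
import std.process : execute;
import std.stdio : writefln;

void main()
{
    // Time a compile-only run of the test program and record the
    // resulting object file size; pass/fail thresholds would live in
    // the test harness, not here.
    auto start = Clock.currTime;
    auto dmd = execute(["dmd", "-c", "hello.d"]);
    auto elapsed = Clock.currTime - start;
    assert(dmd.status == 0, dmd.output);
    writefln("compile time: %s", elapsed);
    writefln("object size:  %s bytes", getSize("hello.o"));
}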
Mar 30 2015
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Monday, 30 March 2015 at 22:51:43 UTC, Andrei Alexandrescu 
wrote:
 Part of our acceptance tests should be peak memory, object file 
 size, executable file size, and run time for building a few 
 test programs (starting with "hello, world"). Any change in 
 these must be investigated, justified, and documented. -- Andrei
I have filed this issue today: https://issues.dlang.org/show_bug.cgi?id=14381
Mar 30 2015
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/30/15 7:41 PM, Vladimir Panteleev wrote:
 On Monday, 30 March 2015 at 22:51:43 UTC, Andrei Alexandrescu wrote:
 Part of our acceptance tests should be peak memory, object file size,
 executable file size, and run time for building a few test programs
 (starting with "hello, world"). Any change in these must be
 investigated, justified, and documented. -- Andrei
I have filed this issue today: https://issues.dlang.org/show_bug.cgi?id=14381
The current situation is a shame. I appreciate the free service we're getting, but sometimes you just can't afford the free stuff. -- Andrei
Mar 30 2015
prev sibling parent reply "Jake The Baker" <Jake TheBaker.com> writes:
On Monday, 30 March 2015 at 22:47:51 UTC, lobo wrote:
 On Monday, 30 March 2015 at 22:39:51 UTC, lobo wrote:
 On Sunday, 29 March 2015 at 23:14:31 UTC, Martin Krejcirik 
 wrote:
 It seems like every DMD release makes compilation slower. 
 This time I see 10.8s vs 7.8s on my little project. I know 
 this is generally least of concern, and D1's lighting-fast 
 times are long gone, but since Walter often claims D's 
 superior compilation speeds, maybe some profiling is in order 
 ?
I'm finding memory usage the biggest problem for me. 3s speed increase is not nice but an increase of 500MB RAM usage with DMD 2.067 over 2.066 means I can no longer build one of my projects. bye, lobo
I should add that I am on a 32-bit machine with 4GB RAM. I just ran some tests measuring RAM usage:

DMD 2.067 ~4.2GB (fails here so not sure of the full amount required)
DMD 2.066 ~3.7GB (maximum)
DMD 2.065 ~3.1GB (maximum)

It was right on the edge with 2.066 anyway, but this trend of more RAM usage seems to also be occurring with each DMD release.

bye, lobo
As far as memory is concerned, how hard would it be to simply have DMD use a swap file? This would fix the out-of-memory issues and provide some safety (at least you could get your project to compile). Seems like it would be a relatively simple thing to add?
Mar 31 2015
next sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Tuesday, 31 March 2015 at 19:20:20 UTC, Jake The Baker wrote:
 As far as memory is concerned. How hard would it be to simply 
 have DMD use a swap file?
That'd hit the same walls as the operating system trying to use a swap file at least - running out of address space, and being brutally slow even if it does keep running.
Mar 31 2015
parent reply "Jake The Baker" <Jake TheBaker.com> writes:
On Tuesday, 31 March 2015 at 19:27:35 UTC, Adam D. Ruppe wrote:
 On Tuesday, 31 March 2015 at 19:20:20 UTC, Jake The Baker wrote:
 As far as memory is concerned. How hard would it be to simply 
 have DMD use a swap file?
That'd hit the same walls as the operating system trying to use a swap file at least - running out of address space, and being brutally slow even if it does keep running.
I doubt it. If most modules are sparsely used, it would improve memory usage in proportion to that. Basically, if D would monitor file/module usage and compile areas that are relatively independent, it should minimize disk usage: page out stuff you know won't be needed. If it was smart enough, it could order the data through module usage and compile the independent ones first, then only the ones that are simple dependencies, etc. The benefit of such a system is that larger projects get the biggest boost (there are more independent modules floating around), hence at some point it becomes a non-issue.
Mar 31 2015
parent reply "lobo" <swamplobo gmail.com> writes:
On Wednesday, 1 April 2015 at 02:54:48 UTC, Jake The Baker wrote:
 On Tuesday, 31 March 2015 at 19:27:35 UTC, Adam D. Ruppe wrote:
 On Tuesday, 31 March 2015 at 19:20:20 UTC, Jake The Baker 
 wrote:
 As far as memory is concerned. How hard would it be to simply 
 have DMD use a swap file?
That'd hit the same walls as the operating system trying to use a swap file at least - running out of address space, and being brutally slow even if it does keep running.
I doubt it. If most modules are sparsely used it would improve memory usage in proportion to that. Basically if D would monitor file/module usage and compile areas that are relatively independent it should minimize disk usage. Basically page out stuff you know won't be needed. If it was smart enough it could order the data through module usage and compile the independent ones first, then only the ones that are simple dependencies, etc). The benefits to such a system is that larger projects get the biggest boost(there are more independent modules floating around. Hence at some point it becomes a non-issue.
I have no idea what you're talking about here, sorry. I'm compiling modules separately already to object files. I think that helps reduce memory usage but I could be mistaken. I think the main culprit now is my attempts to (ab)use CTFE. After switching to DMD 2.066 I started adding `enum val=f()` where I could. After reading the discussions here I went about reverting most of these back to `auto val=<blah>` and I'm building again :-) DMD 2.067 is now maxing out at ~3.8GB and stable. bye, lobo
Mar 31 2015
parent "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"lobo"  wrote in message news:vydmnbzapttzjnnctizm forum.dlang.org...

 I think the main culprit now is my attempts to (ab)use CTFE. After 
 switching to DMD 2.066 I started adding `enum val=f()` where I could. 
 After reading the discussions here I went about reverting most of these 
 back to `auto val=<blah>` and I'm building again :-)

 DMD 2.067 is now maxing out at ~3.8GB and stable.
Yeah, the big problem is that dmd's interpreter sort of evolved out of the constant folder, and wasn't designed for ctfe. A new interpreter for dmd is one of the projects I hope to get to after DDMD is complete, unless somebody beats me to it.
Apr 01 2015
prev sibling next sibling parent "lobo" <swamplobo gmail.com> writes:
On Tuesday, 31 March 2015 at 19:20:20 UTC, Jake The Baker wrote:
 On Monday, 30 March 2015 at 22:47:51 UTC, lobo wrote:
 On Monday, 30 March 2015 at 22:39:51 UTC, lobo wrote:
 On Sunday, 29 March 2015 at 23:14:31 UTC, Martin Krejcirik 
 wrote:
 It seems like every DMD release makes compilation slower. 
 This time I see 10.8s vs 7.8s on my little project. I know 
 this is generally least of concern, and D1's lighting-fast 
 times are long gone, but since Walter often claims D's 
 superior compilation speeds, maybe some profiling is in 
 order ?
I'm finding memory usage the biggest problem for me. 3s speed increase is not nice but an increase of 500MB RAM usage with DMD 2.067 over 2.066 means I can no longer build one of my projects. bye, lobo
I should add that I am on a 32-bit machine with 4GB RAM. I just ran some tests measuring RAM usage:

DMD 2.067 ~4.2GB (fails here so not sure of the full amount required)
DMD 2.066 ~3.7GB (maximum)
DMD 2.065 ~3.1GB (maximum)

It was right on the edge with 2.066 anyway, but this trend of more RAM usage seems to also be occurring with each DMD release.

bye, lobo
As far as memory is concerned. How hard would it be to simply have DMD use a swap file? This would fix the out of memory issues and provide some safety(at least you can get your project to compile. Seems like it would be a relatively simple thing to add?
It's so incredibly slow and unproductive that it's not really an option. My primary reason for using D is that I can be as productive as I am in Python but retain the same raw native power of C++. Anyway, it sounds like the D devs have a few good ideas on how to resolve this.

bye, lobo
Mar 31 2015
prev sibling parent "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Jake The Baker"  wrote in message 
news:bmwxxjmcoszhbexotufx forum.dlang.org...

 As far as memory is concerned. How hard would it be to simply have DMD use 
 a swap file? This would fix the out of memory issues and provide some 
 safety(at least you can get your project to compile. Seems like it would 
 be a relatively simple thing to add?
It seems unlikely that having dmd use its own swap file would perform better than the operating system's implementation.
Apr 01 2015
prev sibling next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Mon, Mar 30, 2015 at 10:39:50PM +0000, lobo via Digitalmars-d wrote:
 On Sunday, 29 March 2015 at 23:14:31 UTC, Martin Krejcirik wrote:
It seems like every DMD release makes compilation slower. This time I
see 10.8s vs 7.8s on my little project. I know this is generally
least of concern, and D1's lighting-fast times are long gone, but
since Walter often claims D's superior compilation speeds, maybe some
profiling is in order ?
I'm finding memory usage the biggest problem for me. 3s speed increase is not nice but an increase of 500MB RAM usage with DMD 2.067 over 2.066 means I can no longer build one of my projects.
[...]

Yeah, dmd memory consumption is way off the charts, because under the pretext of compile speed it never frees allocated memory. Unfortunately, the assumption that not managing memory == faster quickly becomes untrue once dmd runs out of RAM and the OS starts thrashing. Compile times quickly skyrocket exponentially as everything gets stuck on I/O.

This is one of the big reasons I can't use D on my work PC, because it's an older machine with limited RAM, and when DMD is running the whole box slows down to an unusable crawl.

This is not the first time this issue was brought up, but it seems nobody in the compiler team cares enough to do anything about it. :-(

T

--
Lottery: tax on the stupid. -- Slashdotter
Mar 30 2015
parent reply "w0rp" <devw0rp gmail.com> writes:
On Monday, 30 March 2015 at 22:55:50 UTC, H. S. Teoh wrote:
 On Mon, Mar 30, 2015 at 10:39:50PM +0000, lobo via 
 Digitalmars-d wrote:
 On Sunday, 29 March 2015 at 23:14:31 UTC, Martin Krejcirik 
 wrote:
It seems like every DMD release makes compilation slower. 
This time I
see 10.8s vs 7.8s on my little project. I know this is 
generally
least of concern, and D1's lighting-fast times are long gone, 
but
since Walter often claims D's superior compilation speeds, 
maybe some
profiling is in order ?
I'm finding memory usage the biggest problem for me. 3s speed increase is not nice but an increase of 500MB RAM usage with DMD 2.067 over 2.066 means I can no longer build one of my projects.
[...] Yeah, dmd memory consumption is way off the charts, because under the pretext of compile speed it never frees allocated memory. Unfortunately, the assumption that not managing memory == faster quickly becomes untrue once dmd runs out of RAM and the OS starts thrashing. Compile times quickly skyrocket exponentially as everything gets stuck on I/O. This is one of the big reasons I can't use D on my work PC, because it's an older machine with limited RAM, and when DMD is running the whole box slows down to an unusable crawl. This is not the first time this issue was brought up, but it seems nobody in the compiler team cares enough to do anything about it. :-( T
I sometimes think DMD's memory should be... garbage collected. I used the forbidden phrase!

Seriously though, allocating a bunch of memory until you hit some maximum threshold, possibly configured, and freeing unreferenced memory at that point, pausing compilation while that happens? This is GC. I wonder if someone enterprising enough would be willing to try it out with DDMD by swapping malloc calls with calls to D's GC or something.
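A minimal sketch of that threshold idea, assuming the compiler routed its allocations through a wrapper like the one below (ThresholdAllocator and the 1GB default are invented for illustration, not anything dmd actually has):

import core.memory : GC;

struct ThresholdAllocator
{
    size_t allocated;                        // bytes handed out since the last collection
    size_t threshold = 1024 * 1024 * 1024;   // e.g. 1GB; could come from a compiler switch

    void* allocate(size_t size)
    {
        if (allocated > threshold)
        {
            GC.collect();    // pause compilation and free unreferenced memory
            allocated = 0;
        }
        allocated += size;
        return GC.malloc(size);
    }
}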
Mar 30 2015
next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Monday, 30 March 2015 at 23:28:50 UTC, w0rp wrote:
 I sometimes think DMD's memory should be... garbage collected. 
 I used the forbidden phrase!
Yes, set an initial heap size of 100MB or something and the GC won't kick in for scripts. Also, free after CTFE!
Mar 30 2015
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/30/15 4:28 PM, w0rp wrote:
 I sometimes think DMD's memory should be... garbage collected. I used
 the forbidden phrase!
Compiler workloads are a good candidate for GC. -- Andrei
Mar 30 2015
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 31 March 2015 at 00:54:08 UTC, Andrei Alexandrescu 
wrote:
 On 3/30/15 4:28 PM, w0rp wrote:
 I sometimes think DMD's memory should be... garbage collected. 
 I used
 the forbidden phrase!
Compiler workloads are a good candidate for GC. -- Andrei
Yes, compilers tend to perform significantly better with GC than with other memory management strategies. Ironically, I think that has weighted language design a bit too much in favor of GC in the general case.
Mar 30 2015
parent reply Martin Nowak <code+news.digitalmars dawg.eu> writes:
On 03/31/2015 05:51 AM, deadalnix wrote:
 Yes, compiler to perform significantly better with GC than with other
 memory management strategy. Ironically, I think that weighted a bit too
 much in favor of GC for language design in the general case.
Why? Compilers use a lot of long-lived data structures (AST, metadata) which is particularly bad for a conservative GC. Any evidence to the contrary?
Mar 31 2015
parent "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 31 March 2015 at 19:19:23 UTC, Martin Nowak wrote:
 On 03/31/2015 05:51 AM, deadalnix wrote:
 Yes, compiler to perform significantly better with GC than 
 with other
 memory management strategy. Ironically, I think that weighted 
 a bit too
 much in favor of GC for language design in the general case.
Why? Compilers use a lot of long-lived data structures (AST, metadata) which is particularly bad for a conservative GC. Any evidence to the contrary?
The graph is not acyclic, which makes it even worse for anything else.
Mar 31 2015
prev sibling next sibling parent "weaselcat" <weaselcat gmail.com> writes:
On Monday, 30 March 2015 at 23:28:50 UTC, w0rp wrote:
 Seriously though, allocating a bunch of memory until you hit 
 some maximum threshold, possibly configured, and freeing 
 unreferenced memory at that point, pausing compilation while 
 that happens? This is GC. I wonder if someone enterprising 
 enough would be willing to try it out with DDMD by swapping 
 malloc calls with calls to D's GC or something.
has anyone tried using boehm with dmd? I'm pretty sure it has a way of being LD_PRELOADed to override malloc IIRC.
Mar 30 2015
prev sibling next sibling parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"w0rp"  wrote in message news:leajtjgremulowqoxqpc forum.dlang.org...

 I sometimes think DMD's memory should be... garbage collected. I used the 
 forbidden phrase!

 Seriously though, allocating a bunch of memory until you hit some maximum 
 threshold, possibly configured, and freeing unreferenced memory at that 
 point, pausing compilation while that happens? This is GC. I wonder if 
 someone enterprising enough would be willing to try it out with DDMD by 
 swapping malloc calls with calls to D's GC or something.
I've used D's GC with DDMD. It works*, but you're trading better memory usage for worse allocation speed. It's quite possible we could add a switch to ddmd to enable the GC.

* Well actually it currently segfaults, but not because of anything fundamentally wrong with the approach.

After switching to DDMD we will have a HUGE number of options readily available for reducing memory usage, such as using allocation-free range code and enabling the GC.
Mar 30 2015
next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 31 March 2015 at 04:56:13 UTC, Daniel Murphy wrote:
 I've used D's GC with DDMD.  It works*, but you're trading 
 better memory usage for worse allocation speed.  It's quite 
 possible we could add a switch to ddmd to enable the GC.
That is not accurate. For small programs, yes. For anything non-trivial, the amount of memory in the working set becomes so big that I doubt there is any advantage in doing so.
Mar 30 2015
next sibling parent "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"deadalnix"  wrote in message news:uwajsjgcjtzfeqtqoyjt forum.dlang.org...

 On Tuesday, 31 March 2015 at 04:56:13 UTC, Daniel Murphy wrote:
 I've used D's GC with DDMD.  It works*, but you're trading better memory 
 usage for worse allocation speed.  It's quite possible we could add a 
 switch to ddmd to enable the GC.
That is not accurate. For small programs, yes. For anything non trivial, the amount of memory in the working is set become so big that I doubt there is any advantage of doing so.
I don't see how it's inaccurate. Many projects fit into the range where they do not exhaust physical memory, and the slower allocation speed can really hurt. It's worth noting that 'small' doesn't mean low number of lines of code, but low number of instantiated templates and ctfe calls.
Mar 30 2015
prev sibling parent reply ketmar <ketmar ketmar.no-ip.org> writes:
On Tue, 31 Mar 2015 05:21:13 +0000, deadalnix wrote:

 On Tuesday, 31 March 2015 at 04:56:13 UTC, Daniel Murphy wrote:
 I've used D's GC with DDMD.  It works*, but you're trading better
 memory usage for worse allocation speed.  It's quite possible we could
 add a switch to ddmd to enable the GC.
That is not accurate. For small programs, yes. For anything non trivial, the amount of memory in the working is set become so big that I doubt there is any advantage of doing so.
i think that DDMD can start with GC turned off, and automatically turn it on when RAM consumption goes over 1GB, for example. this way small-sized (and even middle-sized) projects without heavy CTFE will still enjoy "nofree is fast" strategy, and big projects will not eat the whole box' RAM.
Mar 30 2015
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Tuesday, 31 March 2015 at 05:42:02 UTC, ketmar wrote:
 i think that DDMD can start with GC turned off, and 
 automatically turn it
 on when RAM consumption goes over 1GB, for example. this way 
 small-sized
 (and even middle-sized) projects without heavy CTFE will still 
 enjoy
 "nofree is fast" strategy, and big projects will not eat the 
 whole box'
 RAM.
Recording the information necessary to free memory costs performance (and more memory) itself. With a basic bump-the-pointer scheme, you don't need to worry about page sizes or free lists or heap fragmentation - all allocated data is contiguous, there is no metadata, and you can't back out of that.
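For reference, the bump-the-pointer scheme being described here looks roughly like this (BumpAllocator is an illustrative name, not dmd's actual allocator):

import core.stdc.stdlib : malloc;

struct BumpAllocator
{
    ubyte* base;       // one big contiguous block
    size_t used;
    size_t capacity;

    this(size_t capacity)
    {
        this.base = cast(ubyte*) malloc(capacity);
        this.capacity = capacity;
    }

    void* allocate(size_t size)
    {
        size = (size + 15) & ~cast(size_t) 15;            // keep allocations 16-byte aligned
        assert(used + size <= capacity, "out of memory");
        auto p = base + used;
        used += size;      // just bump the pointer: no free lists, no metadata, no free()
        return cast(void*) p;
    }
}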
Mar 30 2015
next sibling parent ketmar <ketmar ketmar.no-ip.org> writes:
On Tue, 31 Mar 2015 05:57:45 +0000, Vladimir Panteleev wrote:

 On Tuesday, 31 March 2015 at 05:42:02 UTC, ketmar wrote:
 i think that DDMD can start with GC turned off, and automatically turn
 it on when RAM consumption goes over 1GB, for example. this way
 small-sized (and even middle-sized) projects without heavy CTFE will
 still enjoy "nofree is fast" strategy, and big projects will not eat
 the whole box'
 RAM.
 Recording the information necessary to free memory costs performance (and more memory) itself. With a basic bump-the-pointer scheme, you don't need to worry about page sizes or free lists or heap fragmentation - all allocated data is contiguous, there is no metadata, and you can't back out of that.
TANSTAAFL. alas. yet without `free()` there aren't free lists to scan and so on, so it can be almost as fast as bump-the-pointer. the good thing is that user doesn't have to do the work that machine can do for him, i.e. thinking about how to invoke the compiler -- with GC or without GC.
Mar 31 2015
prev sibling parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Vladimir Panteleev"  wrote in message 
news:remgknxogqlfwfnsubce forum.dlang.org...

 Recording the information necessary to free memory costs performance (and 
 more memory) itself. With a basic bump-the-pointer scheme, you don't need 
 to worry about page sizes or free lists or heap fragmentation - all 
 allocated data is contiguous, there is no metadata, and you can't back out 
 of that.
It's possible that we could use a hybrid approach, where a GB or so is allocated from the GC in one chunk, then filled up using a bump-pointer allocator. When that's exhausted, the GC can start being used as normal for the rest of the compilation. The big chunk will obviously never be freed, but the GC still has a good chance to keep memory usage under control. (on 64-bit at least)
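Something like the following, assuming one GC-allocated chunk filled by bumping a pointer and a fallback to plain GC allocation once it runs out (HybridAllocator and the field names are made up; this is not how ddmd is actually wired up):

import core.memory : GC;

struct HybridAllocator
{
    ubyte* chunk;      // one big block obtained from the GC up front
    size_t used;
    size_t capacity;

    this(size_t capacity)
    {
        this.chunk = cast(ubyte*) GC.malloc(capacity);
        this.capacity = capacity;
    }

    void* allocate(size_t size)
    {
        size = (size + 15) & ~cast(size_t) 15;
        if (used + size <= capacity)
        {
            auto p = chunk + used;    // cheap bump-pointer path, never freed
            used += size;
            return cast(void*) p;
        }
        return GC.malloc(size);       // chunk exhausted: normal GC allocation from here on
    }
}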
Mar 31 2015
parent reply "Temtaime" <temtaime gmail.com> writes:
Has anyone looked at how msvc, for example, compiles really big 
files? I have never seen it go over 200 MB. And it is written in 
C++, so no GC. And it compiles very quickly.
I think DMD should be refactored and free the memory, pools and 
other techniques.
Mar 31 2015
next sibling parent "Temtaime" <temtaime gmail.com> writes:
*use pools...
Mar 31 2015
prev sibling parent reply ketmar <ketmar ketmar.no-ip.org> writes:
On Tue, 31 Mar 2015 10:14:05 +0000, Temtaime wrote:

 Is anyone there looked how msvc for example compiles really big files ?
 I never seen it goes over 200 MB. And it is written in C++, so no GC.
 And compiles very quick.
and it has no CTFE, so... CTFE is a big black hole that eats memory like crazy.
Mar 31 2015
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 31 March 2015 at 11:29:23 UTC, ketmar wrote:
 On Tue, 31 Mar 2015 10:14:05 +0000, Temtaime wrote:

 Is anyone there looked how msvc for example compiles really 
 big files ?
 I never seen it goes over 200 MB. And it is written in C++, so 
 no GC.
 And compiles very quick.
and it has no CTFE, so... CTFE is a big black hole that eats memory like crazy.
I'm going to propose again the same thing as in the past:
 - before CTFE switch pool.
 - CTFE in the new pool.
 - deep copy result from ctfe pool to main pool.
 - ditch ctfe pool.
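Very roughly, and with a toy Node type standing in for CTFE values (Region, Node, makeList and deepCopy are all made up; dmd's real Expression nodes are far more involved), the proposed flow would look something like this:

struct Region
{
    ubyte[] buffer;
    size_t used;

    this(size_t capacity) { buffer = new ubyte[capacity]; }

    void* allocate(size_t size)
    {
        assert(used + size <= buffer.length, "CTFE pool exhausted");
        auto p = &buffer[used];
        used += size;
        return p;
    }

    void release() { buffer = null; used = 0; }   // ditch the whole pool at once
}

struct Node { long value; Node* next; }   // stand-in for a CTFE value

// Build the CTFE result inside the dedicated region...
Node* makeList(ref Region ctfePool, long[] values)
{
    Node* head = null;
    foreach_reverse (v; values)
    {
        auto n = cast(Node*) ctfePool.allocate(Node.sizeof);
        *n = Node(v, head);
        head = n;
    }
    return head;
}

// ...then deep copy it into ordinary (main pool) memory, so the region
// and everything CTFE allocated can be released afterwards.
Node* deepCopy(const(Node)* n)
{
    return n is null ? null : new Node(n.value, deepCopy(n.next));
}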
Mar 31 2015
next sibling parent ketmar <ketmar ketmar.no-ip.org> writes:
On Tue, 31 Mar 2015 18:24:48 +0000, deadalnix wrote:

 On Tuesday, 31 March 2015 at 11:29:23 UTC, ketmar wrote:
 On Tue, 31 Mar 2015 10:14:05 +0000, Temtaime wrote:

 Is anyone there looked how msvc for example compiles really big files
 ?
 I never seen it goes over 200 MB. And it is written in C++, so no GC.
 And compiles very quick.
and it has no CTFE, so... CTFE is a big black hole that eats memory like crazy.
 I'm going to propose again the same thing as in the past:
 - before CTFE switch pool.
 - CTFE in the new pool.
 - deep copy result from ctfe pool to main pool.
 - ditch ctfe pool.
this won't really help long CTFE calls (like building a parser based on grammar, for example, as this is one very long call). it will slow down simple CTFE calls though.

it *may* help, but i'm looking at my "life" sample, for example, and see that it eats all my RAM while parsing a big .lif file. it has to do that in one call, as there is no way to enumerate existing files in a directory and process them sequentially -- as there is no way to store state between CTFE calls, so i can't even create numbered arrays with data.
Mar 31 2015
prev sibling next sibling parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Tuesday, 31 March 2015 at 18:24:49 UTC, deadalnix wrote:
 On Tuesday, 31 March 2015 at 11:29:23 UTC, ketmar wrote:
 On Tue, 31 Mar 2015 10:14:05 +0000, Temtaime wrote:

 Is anyone there looked how msvc for example compiles really 
 big files ?
 I never seen it goes over 200 MB. And it is written in C++, 
 so no GC.
 And compiles very quick.
and it has no CTFE, so... CTFE is a big black hole that eats memory like crazy.
I'm going to propose again the same thing as in the past : - before CTFE switch pool. - CTFE in the new pool. - deep copy result from ctfe pool to main pool. - ditch ctfe pool.
Wait, you mean DMD doesn't already do something like that? Yikes. I had always assumed (without looking) that ctfe used some separate heap that was chucked after each call.
Mar 31 2015
prev sibling parent reply Martin Nowak <code+news.digitalmars dawg.eu> writes:
On 03/31/2015 08:24 PM, deadalnix wrote:
 I'm going to propose again the same thing as in the past :
  - before CTFE switch pool.
  - CTFE in the new pool.
  - deep copy result from ctfe pool to main pool.
  - ditch ctfe pool.
No, it's trivial enough to implement a full AST interpreter. The way it's done currently (using AST nodes as CTFE interpreter values) makes it very hard to use a distinct allocator, because ownership can move from CTFE to compiler and vice versa.
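To illustrate the distinction (interpreter values that are separate from the AST, so interpreter allocations never become part of the tree the compiler owns), here is a deliberately tiny sketch; Node, Value and eval are made-up names, not dmd's:

enum Op { add, mul }

class Node                     // AST node, owned by the compiler
{
    Op op;
    Node left, right;
    long literal;
    this(long v) { literal = v; }
    this(Op op, Node l, Node r) { this.op = op; left = l; right = r; }
}

struct Value { long i; }       // interpreter value, owned by the interpreter

Value eval(Node n)             // a full interpreter returns Values, never AST nodes
{
    if (n.left is null)
        return Value(n.literal);
    auto l = eval(n.left);
    auto r = eval(n.right);
    return n.op == Op.add ? Value(l.i + r.i) : Value(l.i * r.i);
}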
Mar 31 2015
parent "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 31 March 2015 at 21:53:29 UTC, Martin Nowak wrote:
 On 03/31/2015 08:24 PM, deadalnix wrote:
 I'm going to propose again the same thing as in the past :
  - before CTFE switch pool.
  - CTFE in the new pool.
  - deep copy result from ctfe pool to main pool.
  - ditch ctfe pool.
No, it's trivial enough to implement a full AST interpreter. The way it's done currently (using AST nodes as CTFE interpreter values) makes it very hard to use a distinct allocator, because ownership can move from CTFE to compiler and vice versa.
This is why I introduced a deep copy step in there.
Mar 31 2015
prev sibling parent reply "Random D-user" <no email.com> writes:
 I've used D's GC with DDMD.  It works*, but you're trading 
 better memory usage for worse allocation speed.  It's quite 
 possible we could add a switch to ddmd to enable the GC.
As a random d-user (who cares about perf/speed and just happened to read this), a switch sounds VERY good to me. I don't want to pay the price of GC because of some low-end machines. Memory is really cheap these days and pretty much every machine is 64-bit (even phones are transitioning fast to 64-bit).

Also, I wanted to add that freeing (at least to the OS (does this apply to GC?)) isn't exactly free either. In fact it can be more costly than mallocing. Here's an enlightening article:

https://randomascii.wordpress.com/2014/12/10/hidden-costs-of-memory-allocation/
Mar 31 2015
next sibling parent reply ketmar <ketmar ketmar.no-ip.org> writes:
On Wed, 01 Apr 2015 02:25:43 +0000, Random D-user wrote:

 GC because of some low-end machines. Memory is really cheap these days
 and pretty much every machine is 64-bits (even phones are trasitioning
 fast to 64-bits).
this is the essence of "modern computing", btw. "hey, we have this resource! hey, we have the only program the user will ever want to run, so assume that all that resource is ours! what? just buy a better box!"
Mar 31 2015
parent reply "weaselcat" <weaselcat gmail.com> writes:
On Wednesday, 1 April 2015 at 04:49:55 UTC, ketmar wrote:
 On Wed, 01 Apr 2015 02:25:43 +0000, Random D-user wrote:

 GC because of some low-end machines. Memory is really cheap 
 these days
 and pretty much every machine is 64-bits (even phones are 
 trasitioning
 fast to 64-bits).
this is the essense of "modern computing", btw. "hey, we have this resource! hey, we have the only program user will ever want to run, so assume that all that resource is ours! what? just buy a better box!"
google/mozilla's developer mantra regarding web browsers.
Mar 31 2015
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Wednesday, 1 April 2015 at 04:51:26 UTC, weaselcat wrote:
 On Wednesday, 1 April 2015 at 04:49:55 UTC, ketmar wrote:
 On Wed, 01 Apr 2015 02:25:43 +0000, Random D-user wrote:

 GC because of some low-end machines. Memory is really cheap 
 these days
 and pretty much every machine is 64-bits (even phones are 
 trasitioning
 fast to 64-bits).
this is the essense of "modern computing", btw. "hey, we have this resource! hey, we have the only program user will ever want to run, so assume that all that resource is ours! what? just buy a better box!"
google/mozilla's developer mantra regarding web browsers.
They must have an agreement with DRAM vendor, I see no other explanation...
Mar 31 2015
parent ketmar <ketmar ketmar.no-ip.org> writes:
On Wed, 01 Apr 2015 06:21:58 +0000, deadalnix wrote:

 On Wednesday, 1 April 2015 at 04:51:26 UTC, weaselcat wrote:
 On Wednesday, 1 April 2015 at 04:49:55 UTC, ketmar wrote:
 On Wed, 01 Apr 2015 02:25:43 +0000, Random D-user wrote:

 GC because of some low-end machines. Memory is really cheap these
 days and pretty much every machine is 64-bits (even phones are
 trasitioning fast to 64-bits).
this is the essense of "modern computing", btw. "hey, we have this resource! hey, we have the only program user will ever want to run, so assume that all that resource is ours! what? just buy a better box!"
google/mozilla's developer mantra regarding web browsers.
 They must have an agreement with DRAM vendor, I see no other explanation...
maybe vendors just giving 'em free DRAM chips...
Apr 01 2015
prev sibling parent "w0rp" <devw0rp gmail.com> writes:
On Wednesday, 1 April 2015 at 02:25:44 UTC, Random D-user wrote:
 I've used D's GC with DDMD.  It works*, but you're trading 
 better memory usage for worse allocation speed.  It's quite 
 possible we could add a switch to ddmd to enable the GC.
As a random d-user (who cares about perf/speed and just happened to read this) a switch sounds VERY good to me. I don't want to pay the price of GC because of some low-end machines. Memory is really cheap these days and pretty much every machine is 64-bits (even phones are trasitioning fast to 64-bits). Also, I wanted to add that freeing (at least to the OS (does this apply to GC?)) isn't exactly free either. Infact it can be more costly than mallocing. Here's enlightening article: https://randomascii.wordpress.com/2014/12/10/hidden-costs-of-memory-allocation/
I think a switch would be good. My main reason for asking for such a thing isn't for performance (not directly), it's for being able to compile some D programs on computers with less memory. I've had machines with 1 or 2 GB of memory on them, wanted to compile a D program, DMD ran out of memory, and the compiler crashed. You can maybe start swapping on disk, but that won't be too great.
Apr 09 2015
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2015-03-31 01:28, w0rp wrote:

 I sometimes think DMD's memory should be... garbage collected. I used
 the forbidden phrase!
Doesn't DMD already have a GC that is disabled?

--
/Jacob Carlborg
Mar 31 2015
parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Jacob Carlborg"  wrote in message news:mfe0dm$2i6l$1 digitalmars.com... 

 Doesn't DMD already have a GC that is disabled?
It did once, but it's been gone for a while now.
Mar 31 2015
parent "Temtaime" <temtaime gmail.com> writes:
I don't use CTFE in my game engine and DMD uses about 600 MB 
memory per file for instance.
Mar 31 2015
prev sibling parent Mathias Lang via Digitalmars-d <digitalmars-d puremagic.com> writes:
2015-03-31 0:53 GMT+02:00 H. S. Teoh via Digitalmars-d <
digitalmars-d puremagic.com>:

 Yeah, dmd memory consumption is way off the charts, because under the
 pretext of compile speed it never frees allocated memory. Unfortunately,
 the assumption that not managing memory == faster quickly becomes untrue
 once dmd runs out of RAM and the OS starts thrashing. Compile times
 quickly skyrocket exponentially as everything gets stuck on I/O.

 This is one of the big reasons I can't use D on my work PC, because it's
 an older machine with limited RAM, and when DMD is running the whole box
 slows down to an unusable crawl.

 This is not the first time this issue was brought up, but it seems
 nobody in the compiler team cares enough to do anything about it. :-(


 T

 --
 Lottery: tax on the stupid. -- Slashdotter
I can relate. DMD compilation speed was nothing but a myth to me until I migrated from 4GB to 8GB. And every time I compiled something, my computer froze for a few seconds (or a few minutes, depending on the project).
Mar 30 2015