digitalmars.D - Concern about dmd memory usage on win32

reply "monarch_dodra" <monarchdodra gmail.com> writes:
In particular, when compiling "-unittest std\algorithm.d", dmd 
uses *nearly* 1 GB (it uses about 1,051,176K on my machine).

Problem is that when it reaches 1GB, it crashes. I have a pull 
request which adds a few unittests to algorithm, and it is 
consistently crashing on win32 with an out of memory error.

In layman's terms: std\algorithm.d is full. You literally can't 
add any more unittests to it, without crashing dmd on win32.

I'd have recommended splitting the unittests into sub-modules or 
whatnot, but come to think of it, I'm actually more concerned 
that a module could *singlehandedly* make the compiler crash 
with an out-of-memory error...

Also, I'm no expert, but why is my dmd limited to 1 GB memory on 
my 64 bit machine...?
Dec 07 2012
next sibling parent "monarch_dodra" <monarchdodra gmail.com> writes:
On Friday, 7 December 2012 at 14:23:39 UTC, monarch_dodra wrote:
 [SNIP]
Come to think of it, what's funny is that the pull *used* to pass, but it doesn't anymore, and I haven't changed anything in the meantime. There *have* been changes to algorithm, so that may be it, but it may also be a performance regression in dmd. Again, I'm no expert, so all I can do is report :/
Dec 07 2012
Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 12/7/2012 6:23 PM, monarch_dodra wrote:
 In particular, when compiling "-unittest std\algorithm.d", dmd uses
 *nearly* 1 GB (it uses about 1,051,176K on my machine).

 Problem is that when it reaches 1GB, it crashes. I have a pull request
 which adds a few unittests to algorithm, and it is consistently crashing
 on win32 with an out of memory error.
Yup, it dies the same way on auto-tester for my pull.
 In layman's terms: std\algorithm.d is full. You literally can't add any
 more unittests to it, without crashing dmd on win32.

 I'd have recommended splitting the unittests in sub-modules or whatnot,
 but come to think about it, I'm actually more concern that a module
 could *singlehandedly* make the compiler crash on out of memory...

 Also, I'm no expert, but why is my dmd limited to 1 GB memory on my 64
 bit machine...?
It's not large-address aware (a matter of setting the proper bit in the PE header), thus it is limited to 2 GB. The other part of the problem has to do with the way the DMC run-time allocates virtual memory. Somebody fixed it, but the patch failed to get any recognition. A quick way to check whether a given executable already has that bit set is sketched below.

-- 
Dmitry Olshansky
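A minimal D sketch of such a check (my own throwaway code, not an existing tool; the flag value comes from the PE spec, everything else is made up):

//----
// laacheck.d - reports whether an executable's PE header has the
// IMAGE_FILE_LARGE_ADDRESS_AWARE bit set in its COFF Characteristics field.
import std.stdio;
import std.file : read;

void main(string[] args)
{
    if (args.length < 2)
    {
        writeln("usage: laacheck <executable>");
        return;
    }

    auto data = cast(const(ubyte)[]) read(args[1]);

    // Offset 0x3C of the DOS header holds e_lfanew, the file offset of "PE\0\0".
    uint peOffset = data[0x3C] | (data[0x3D] << 8) | (data[0x3E] << 16) | (data[0x3F] << 24);

    // The Characteristics field sits 22 bytes past the PE signature
    // (4 signature bytes + 18 bytes into the COFF file header).
    ushort characteristics = cast(ushort)(data[peOffset + 22] | (data[peOffset + 23] << 8));

    enum IMAGE_FILE_LARGE_ADDRESS_AWARE = 0x0020;
    writeln(args[1], (characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE)
        ? ": large-address aware"
        : ": not large-address aware (2 GB address space)");
}
//----

Run it against whichever dmd.exe you're actually using.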
Dec 07 2012
Jonathan M Davis <jmdavisProg gmx.com> writes:
On Friday, December 07, 2012 15:23:36 monarch_dodra wrote:
 In particular, when compiling "-unittest std\algorithm.d", dmd
 uses *nearly* 1 GB (it uses about 1,051,176K on my machine).
 
 Problem is that when it reaches 1GB, it crashes. I have a pull
 request which adds a few unittests to algorithm, and it is
 consistently crashing on win32 with an out of memory error.
 
 In layman's terms: std\algorithm.d is full. You literally can't
 add any more unittests to it, without crashing dmd on win32.
 
 I'd have recommended splitting the unittests in sub-modules or
 whatnot, but come to think about it, I'm actually more concern
 that a module could *singlehandedly* make the compiler crash on
 out of memory...
 
 Also, I'm no expert, but why is my dmd limited to 1 GB memory on
 my 64 bit machine...?
If you look in win32.mak, you'll see that the source files are split into separate groups (STD_1_HEAVY, STD_2_HEAVY, STD_3, STD_4, etc.). This is specifically to combat this problem. Every time that we reach the point that the compilation starts running out of memory again, we add more groups and/or rearrange them. It's suboptimal, but I don't know what else we can do at this point given dmd's limitations on 32-bit Windows. - Jonathan M Davis
Dec 07 2012
Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Fri, 07 Dec 2012 08:23:06 -0800
Jonathan M Davis <jmdavisProg gmx.com> wrote:

 On Friday, December 07, 2012 15:23:36 monarch_dodra wrote:
 In particular, when compiling "-unittest std\algorithm.d", dmd
 uses *nearly* 1 GB (it uses about 1,051,176K on my machine).
 
 Problem is that when it reaches 1GB, it crashes. I have a pull
 request which adds a few unittests to algorithm, and it is
 consistently crashing on win32 with an out of memory error.
 
 In layman's terms: std\algorithm.d is full. You literally can't
 add any more unittests to it, without crashing dmd on win32.
 
 I'd have recommended splitting the unittests in sub-modules or
 whatnot, but come to think about it, I'm actually more concern
 that a module could *singlehandedly* make the compiler crash on
 out of memory...
 
 Also, I'm no expert, but why is my dmd limited to 1 GB memory on
 my 64 bit machine...?
If you look in win32.mak, you'll see that the source files are split into separate groups (STD_1_HEAVY, STD_2_HEAVY, STD_3, STD_4, etc.). This is specifically to combat this problem. Every time that we reach the point that the compilation starts running out of memory again, we add more groups and/or rearrange them. It's suboptimal, but I don't know what else we can do at this point given dmd's limitations on 32-bit Windows.
Sooo...what's the status of fixing DMD's long-standing memory usage issues?

My understanding is that the big issues are:

1. CTFE allocates every time a CTFE variable's value is changed.

2. The GC inside DMD is disabled because it broke things, so it never releases memory.

Is this correct? If so, what's the current status of fixes? It seems to me this would be something that should be creeping higher and higher up the priority list (if it hasn't already been doing so).
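To make issue 1 concrete, here's a trivial example (mine, not taken from dmd or Phobos) of the kind of CTFE code that makes the compiler's memory balloon:

//----
// Each append allocates a new copy of the whole string inside dmd's CTFE
// interpreter, so compile-time memory grows with the number of mutations.
string buildBlob(size_t n)
{
    string s;
    foreach (i; 0 .. n)
        s ~= 'x';
    return s;
}

enum blob = buildBlob(50_000); // forces CTFE; watch dmd's memory climb
//----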
Dec 07 2012
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Friday, December 07, 2012 12:07:06 Nick Sabalausky wrote:
 Sooo...what's the status of fixing DMD's forever-standing memory usage
 issues?
 
 My understanding is that the big issues are:
 
 1. CTFE allocates every time a CTFE variable's value is changed.
 
 2. The GC inside DMD is disabled because it broke things, so it never
 releases memory.
 
 Is this correct? If so, what's the current status of fixes? It seems to
 me this would be something that should be creeping higher and higher up
 the priority list (if it hasn't already been doing so).
The GC didn't break things per se. It just made compilation much slower, and Walter didn't have time to fix it at the time (as dmd was close to a release), so it was disabled. But someone needs to take the time to work on it and make it efficient enough to use (possibly doing stuff like making it so that it only kicks in once at least a certain amount of memory is used, to keep the common case fast but make the memory-intensive cases work). And no one has done that. Walter has been busy with other stuff and has made it clear that it's likely going to need to be someone else who steps up and fixes it, so we're stuck until someone does that.

As for CTFE, I don't know what the current state is. Don has plans, but I get the impression that he's too busy to get much done with them these days. We're dealing with a problem that requires some of our key developers (or someone willing to put in enough time and effort to learn much of what they know) in order to get it done, so it's fallen by the wayside thus far.

- Jonathan M Davis
Dec 07 2012
Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/7/12 1:43 PM, Jonathan M Davis wrote:
 The GC didn't break things per se. It was just made compilation much slower,
 and Walter didn't have time to fix it at the time (as dmd was close to a
 release), so it was disable. But someone needs to take the time to work on it
 and make it efficient enough to use (possibly doing stuff like making it so
that
 it only kicks in at least a certain amount of memory is used to keep the
 common case fast but make the memory-intensive cases work). And no one has
 done that.
I suggested this several times: rework the GC so it only intervenes if the consumed memory would otherwise be prohibitively large. That way there's never a collection during normal compilation. A rough sketch of the policy I have in mind is below.

Andrei
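In D-style pseudocode (this is not dmd's actual allocation code; the names and the threshold are placeholders):

//----
import core.memory : GC;

// Placeholder threshold: collect only once a single compile has allocated
// this much; ordinary builds never get anywhere near it.
enum size_t collectThreshold = 1_500 * 1024 * 1024; // ~1.5 GB

size_t allocatedSinceCollect;

void* compilerAlloc(size_t size)
{
    allocatedSinceCollect += size;
    if (allocatedSinceCollect > collectThreshold)
    {
        GC.collect();              // the only place a collection can happen
        allocatedSinceCollect = 0;
    }
    return GC.malloc(size);
}
//----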
Dec 07 2012
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Friday, 7 December 2012 at 22:30:59 UTC, Andrei Alexandrescu 
wrote:
 On 12/7/12 1:43 PM, Jonathan M Davis wrote:
 The GC didn't break things per se. It was just made 
 compilation much slower,
 and Walter didn't have time to fix it at the time (as dmd was 
 close to a
 release), so it was disable. But someone needs to take the 
 time to work on it
 and make it efficient enough to use (possibly doing stuff like 
 making it so that
 it only kicks in at least a certain amount of memory is used 
 to keep the
 common case fast but make the memory-intensive cases work). 
 And no one has
 done that.
I suggested this several times: work the GC so it only intervenes if the consumed memory would otherwise be prohibitively large. That way there's never a collection during normal compilation. Andrei
Has nobody told you that the GC was THAT SLOW that it was even slower than swapping? You really know nothing about optimization, do you?
Dec 07 2012
Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/7/12 5:37 PM, deadalnix wrote:
 On Friday, 7 December 2012 at 22:30:59 UTC, Andrei Alexandrescu wrote:
 On 12/7/12 1:43 PM, Jonathan M Davis wrote:
 The GC didn't break things per se. It was just made compilation much
 slower,
 and Walter didn't have time to fix it at the time (as dmd was close to a
 release), so it was disable. But someone needs to take the time to
 work on it
 and make it efficient enough to use (possibly doing stuff like making
 it so that
 it only kicks in at least a certain amount of memory is used to keep the
 common case fast but make the memory-intensive cases work). And no
 one has
 done that.
I suggested this several times: work the GC so it only intervenes if the consumed memory would otherwise be prohibitively large. That way there's never a collection during normal compilation. Andrei
Nobody told you that the GC was THAT SLOW that it as even slower than the swap ? You really know nothing about optimization, don't you ?
This is not even remotely appropriate. Where did it come from? Andrei
Dec 07 2012
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Friday, 7 December 2012 at 22:39:09 UTC, Andrei Alexandrescu 
wrote:
 Nobody told you that the GC was THAT SLOW that it as even 
 slower than
 the swap ? You really know nothing about optimization, don't 
 you ?
This is not even remotely appropriate. Where did it come from?
Sorry, I was trying to be ironic. People keep saying that the GC was removed because it was too slow, but it is pretty clear that, however slow it is, it is not slower than swapping.

I'm more and more irritated by the fact that we justify everything here because it has to be fast or whatever, and in the end we have unreliable software that isn't even that fast in many cases.

I'm working on a program that now requires more than 2.5 GB of RAM to compile, where separate compilation is not possible due to bug 8997 and which randomly fails to compile due to bug 8596. It is NOT fast, and that insane memory consumption is a major cause of the slowness. make; make; make; make; make is the new make.
Dec 07 2012
Walter Bright <newshound2 digitalmars.com> writes:
On 12/7/2012 2:51 PM, deadalnix wrote:
 I'm working on a program that now require more than 2.5Gb of RAM to compile,
 where separate compilation is not possible due to bug 8997 and that randomly
 fails to compile due to bug 8596. It is NOT fast and that insane memory
 consumption is a major cause of slowness.
I'm pretty sure the memory consumption happens with CTFE and Don is working on it.
Dec 10 2012
next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Monday, 10 December 2012 at 12:46:19 UTC, Walter Bright wrote:
 On 12/7/2012 2:51 PM, deadalnix wrote:
 I'm working on a program that now require more than 2.5Gb of 
 RAM to compile,
 where separate compilation is not possible due to bug 8997 and 
 that randomly
 fails to compile due to bug 8596. It is NOT fast and that 
 insane memory
 consumption is a major cause of slowness.
I'm pretty sure the memory consumption happens with CTFE and Don is working on it.
I don't have a lot of CTFE, but surely some.
Dec 10 2012
evansl <cppljevans suddenlink.net> writes:
On 12/10/12 06:45, Walter Bright wrote:
 On 12/7/2012 2:51 PM, deadalnix wrote:
 I'm working on a program that now require more than 2.5Gb of RAM to
 compile,
 where separate compilation is not possible due to bug 8997 and that
 randomly
 fails to compile due to bug 8596. It is NOT fast and that insane memory
 consumption is a major cause of slowness.
I'm pretty sure the memory consumption happens with CTFE and Don is working on it.
The following quote from http://valgrind.org/docs/manual/ms-manual.html:

  "it also gives very detailed information that indicates which parts of your program are responsible for allocating the heap memory."

suggests massif might be of some help in narrowing down the cause.
Dec 10 2012
parent "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 11 December 2012 at 04:30:38 UTC, evansl wrote:
 On 12/10/12 06:45, Walter Bright wrote:
 On 12/7/2012 2:51 PM, deadalnix wrote:
 I'm working on a program that now require more than 2.5Gb of 
 RAM to
 compile,
 where separate compilation is not possible due to bug 8997 
 and that
 randomly
 fails to compile due to bug 8596. It is NOT fast and that 
 insane memory
 consumption is a major cause of slowness.
I'm pretty sure the memory consumption happens with CTFE and Don is working on it.
The following quote: it also gives very detailed information that indicates which parts of your program are responsible for allocating the heap memory. from here: http://valgrind.org/docs/manual/ms-manual.html suggests massif might be some help in narrowing down the cause.
The problem with valgrind is that it increases the program's memory consumption quite a lot.
Dec 11 2012
prev sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
deadalnix:

 Nobody told you that the GC was THAT SLOW that it as even 
 slower than the swap ? You really know nothing about 
 optimization, don't you ?
Please be gentle in the forums, even with a person as strong as Andrei. Thank you. Bye, bearophile
Dec 07 2012
parent "deadalnix" <deadalnix gmail.com> writes:
On Friday, 7 December 2012 at 22:42:16 UTC, bearophile wrote:
 deadalnix:

 Nobody told you that the GC was THAT SLOW that it as even 
 slower than the swap ? You really know nothing about 
 optimization, don't you ?
Please be gentle in the forums, even with a person as strong as Andrei. Thank you. Bye, bearophile
I never meant to say that Andrei was wrong; it's the complete opposite. I made my point poorly and I apologize for that. It seemed obvious to me that the GC couldn't be slower than swapping and that nobody would take it seriously.
Dec 07 2012
prev sibling next sibling parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Friday, 7 December 2012 at 16:23:49 UTC, Jonathan M Davis 
wrote:
 If you look in win32.mak, you'll see that the source files are 
 split into
 separate groups (STD_1_HEAVY, STD_2_HEAVY, STD_3, STD_4, etc.). 
 This is
 specifically to combat this problem. Every time that we reach 
 the point that
 the compilation starts running out of memory again, we add more 
 groups and/or
 rearrange them. It's suboptimal, but I don't know what else we 
 can do at this
 point given dmd's limitations on 32-bit Windows.

 - Jonathan M Davis
I had actually been through this before, and someone told me about that. The problem at this point is that this isn't even an option anymore, since std/algorithm.d is in a group *alone*.

On Friday, 7 December 2012 at 17:07:10 UTC, Nick Sabalausky wrote:
 [SNIP]
 Is this correct? If so, what's the current status of fixes? It 
 seems to
 me this would be something that should be creeping higher and 
 higher up
 the priority list (if it hasn't already been doing so).
What he said.
Dec 07 2012
next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Friday, December 07, 2012 18:18:43 monarch_dodra wrote:
 I had actually been through this before, and someone told me
 about that. The problem at this point is that this isn't even an
 option anymore, since std/algorithm.d is in a group *alone*.
Then I'd have two suggestions then:

1. Figure out which tests are too expensive. Either disable them or make them less expensive. If we can't test as much as we need to right now, then the less critical tests will just need to be disabled, as ugly as that may be.

2. Version stuff. Either specifically version some of it out on Windows (or maybe just 32-bit Windows now that we have 64-bit Windows), or put some of the less critical stuff in version blocks that can be explicitly enabled by someone working on Phobos.

The two should probably be combined though. Figure out which tests are problematic and version out the less critical ones on 32-bit Windows. Then you can create a bug report for those tests specifically, and anyone working on the memory problem will have something specific to run. Many of std.datetime's tests used to have to be disabled with version blocks on Windows until the compiler was improved enough and/or the tests were adjusted enough that they could be run on Windows (adjusting the makefile probably helped as well).

Templates and CTFE are particularly expensive, so fantastic tricks like using foreach with TypeTuple can really cost a lot with dmd's current memory issues, and std.algorithm may just not be able to afford some of the better tests right now, as much as that sucks. Something along the lines of the sketch below is what I have in mind.

- Jonathan M Davis
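A hypothetical sketch (the version identifier is made up, and the test body is just a stand-in for the expensive instantiations):

//----
// Gate the heavy, TypeTuple-driven instantiations behind a version identifier
// so the default win32 unittest build skips them.
version (StdAlgorithmHeavyTests)
{
    unittest
    {
        import std.algorithm : equal, map;
        import std.typetuple : TypeTuple;

        // Each pass through this foreach instantiates the templates for
        // another element type - that's where the compile-time memory goes.
        foreach (T; TypeTuple!(int, long, float, double))
        {
            T[] arr = [1, 2, 3];
            assert(equal(map!(a => a + 1)(arr), [2, 3, 4]));
        }
    }
}
//----

The heavy half can then still be run explicitly with -version=StdAlgorithmHeavyTests by anyone digging into the memory problem.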
Dec 07 2012
parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Friday, 7 December 2012 at 18:44:08 UTC, Jonathan M Davis 
wrote:
 On Friday, December 07, 2012 18:18:43 monarch_dodra wrote:
 I had actually been through this before, and someone told me
 about that. The problem at this point is that this isn't even 
 an
 option anymore, since std/algorithm.d is in a group *alone*.
Then I'd have two suggestions then: [SNIP] - Jonathan M Davis
I just tried to version each unittest (for win32) into two different objects (using version blocks). This worked relatively well, up until the final link step, where I was greeted with: //---- unittest5b.obj(unittest5b) Offset 00D68H Record Type 0091 Error 1: Previous Definition Different : _D3std9algorithm6EditOp6__initZ unittest5b.obj(unittest5b) Offset 685C4H Record Type 0091 Error 1: Previous Definition Different : _D3std9algorithm12__ModuleInfoZ unittest5b.obj(unittest5b) Offset 7ACEBH Record Type 00C3 Error 1: Previous Definition Different : __D3std9algorithm9__modtestFZv unittest5b.obj(unittest5b) Offset 7AFB4H Record Type 00C3 Error 1: Previous Definition Different : _D3std9algorithm7__arrayZ unittest5b.obj(unittest5b) Offset 7AFE0H Record Type 00C3 Error 1: Previous Definition Different : _D3std9algorithm8__assertFiZv unittest5b.obj(unittest5b) Offset 7B00CH Record Type 00C3 Error 1: Previous Definition Different : _D3std9algorithm15__unittest_failFiZv //---- The one I'm *really* concerned about is "ModuleInfo": My guess is that I'll never get rid of this error :/ I figure the "easy workaround", it to create a new dedicated executable, which tests just algorithm...? I don't think deactivating unit tests is a great move anyways...
Dec 10 2012
parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Monday, December 10, 2012 13:34:19 monarch_dodra wrote:
 On Friday, 7 December 2012 at 18:44:08 UTC, Jonathan M Davis
 
 wrote:
 On Friday, December 07, 2012 18:18:43 monarch_dodra wrote:
 I had actually been through this before, and someone told me
 about that. The problem at this point is that this isn't even
 an
 option anymore, since std/algorithm.d is in a group *alone*.
Then I'd have two suggestions then: [SNIP] - Jonathan M Davis
I just tried to version each unittest (for win32) into two different objects (using version blocks). This worked relatively well, up until the final link step, where I was greeted with:
[snip]

Different versions of the same module have to be done in separate builds. They couldn't all be in the same build. The Windows build is one executable, and changing that has been rejected (for some good reasons - though it does cause problems here), so versioning the tests means that some of them won't be run as part of the normal unittest build. They'll have to be run manually or will only be run on other OSes which can handle the memory consumption.

For quite a while, a lot of std.datetime's unit tests were just outright disabled on Windows, because the Windows version of dmd couldn't handle it. It's sounding like we're going to have to do the same with some of std.algorithm's unit tests until dmd's issues can be sorted out.

- Jonathan M Davis
Dec 10 2012
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Friday, December 07, 2012 19:43:50 Jonathan M Davis wrote:
 Then I'd have two suggestions then:
I really need to reread my posts more before actually posting them... - Jonathan M Davis
Dec 07 2012
Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 12/7/12, Jonathan M Davis <jmdavisProg gmx.com> wrote:
 Then I'd have two suggestions then:
Actually there is one other way: use version specifiers and then re-compile using different versions, e.g.:

<line 1>    version(StdAlgTest1) { <..lots of tests...> }
<line 1000> version(StdAlgTest2) { <..lots of tests...> }
<line 2000> version(StdAlgTest3) { <..lots of tests...> }

Then the makefile would have to compile algorithm.d 3 times, via something like:

$ rdmd --unittest -version=StdAlgTest1 --main std\algorithm.d
$ rdmd --unittest -version=StdAlgTest2 --main std\algorithm.d
$ rdmd --unittest -version=StdAlgTest3 --main std\algorithm.d

In fact, why aren't we taking advantage of rdmd and using it already, instead of using a separate unittest.d file? I've always used rdmd to test my Phobos changes; it works very simply this way. All it takes to test a module is to pass the --unittest and --main flags and the module name.
Dec 07 2012
prev sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Friday, 7 December 2012 at 16:23:49 UTC, Jonathan M Davis 
wrote:
 If you look in win32.mak, you'll see that the source files are 
 split into
 separate groups (STD_1_HEAVY, STD_2_HEAVY, STD_3, STD_4, etc.). 
 This is
 specifically to combat this problem. Every time that we reach 
 the point that
 the compilation starts running out of memory again, we add more 
 groups and/or
 rearrange them. It's suboptimal, but I don't know what else we 
 can do at this
 point given dmd's limitations on 32-bit Windows.

 - Jonathan M Davis
I don't know? Maybe disabling the GC because it slowed down dmd wasn't a good idea after all. Who cares about a fast compiler if it crashes? It does crash! Yes, but at least it is fast!
Dec 07 2012
Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 12/7/12, deadalnix <deadalnix gmail.com> wrote:
 It does crash ! Yes but at least, it is fast !
Except it's not fast. It's slow and it crashes. It's a Yugo.
Dec 07 2012
prev sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Friday, December 07, 2012 21:21:17 deadalnix wrote:
 On Friday, 7 December 2012 at 16:23:49 UTC, Jonathan M Davis
 
 wrote:
 If you look in win32.mak, you'll see that the source files are
 split into
 separate groups (STD_1_HEAVY, STD_2_HEAVY, STD_3, STD_4, etc.).
 This is
 specifically to combat this problem. Every time that we reach
 the point that
 the compilation starts running out of memory again, we add more
 groups and/or
 rearrange them. It's suboptimal, but I don't know what else we
 can do at this
 point given dmd's limitations on 32-bit Windows.
 
 - Jonathan M Davis
I don't know? Maybe disabling the GC because it slowed down dmd wasn't a good idea after all. Who care about a fast compiler is that one crashes ? It does crash ! Yes but at least, it is fast !
Most programs compile just fine as things are, and Walter cares a _lot_ about speed of compilation, so doing something that harms the common case in favor of a less common one that doesn't even work right now didn't seem like a good idea. But really, what it comes down to is that it was an experimental feature that clearly had problems, so it was temporarily disabled until it could be sorted out. All that means is that things were left exactly as they were rather than introducing a new element that could have caused problems.

Further investigation and work _does_ need to be done, but without proper testing and further work being done on it, it probably _isn't_ a good idea to enable it. As with many things around here, the trick is that someone needs to spend time working on the problem, and no one has done so yet.

- Jonathan M Davis
Dec 07 2012
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Friday, December 07, 2012 20:04:54 Andrej Mitrovic wrote:
 On 12/7/12, Jonathan M Davis <jmdavisProg gmx.com> wrote:
 Then I'd have two suggestions then:
Actually there is one other way, use version specifiers and then re-compile using different versions, e.g.: <line 1> version(StdAlgTest1) { <..lots of tests...> } <line 1000> version(StdAlgTest2) { <..lots of tests...> } <line 2000> version(StdAlgTest3) { <..lots of tests...> } Then the makefile would have to compile algorithm.d 3 times, via something like: $ rdmd --unittest -version=StdAlgTest1 --main std\algorithm.d $ rdmd --unittest -version=StdAlgTest2 --main std\algorithm.d $ rdmd --unittest -version=StdAlgTest3 --main std\algorithm.d In fact, why aren't we taking advantage and using rdmd already instead of using a seperate unittest.d file? I've always used rdmd to test my Phobos changes, it works very simple this way. All it takes to test a module is to pass the --unittest and --main flags and the module name.
The windows build purposefully creates one executable in order to catch stuff like circular dependencies and to give dmd a larger project to compile at once in order to test dmd. Clearly, we're running into issues with that due to dmd's lack of capabilities when it comes to memory. But Walter has rejected all proposals to change it, and to some extent, I think that he's right. If anything, this is just highlighting an area where dmd really needs to be improved. All of the messing around that we've done with the makefiles is just hiding the problem.

The POSIX builds do build the modules separately for unit tests, though they don't use rdmd. I would point out though that as it stands, it wouldn't work to use rdmd, because it's in the tools project and not in dmd, druntime, or Phobos. Rather, it depends on them, so they can't depend on it.

- Jonathan M Davis
Dec 07 2012
Jonathan M Davis <jmdavisProg gmx.com> writes:
On Friday, December 07, 2012 15:23:36 monarch_dodra wrote:
 In particular, when compiling "-unittest std\algorithm.d", dmd
 uses *nearly* 1 GB (it uses about 1,051,176K on my machine).
 
 Problem is that when it reaches 1GB, it crashes. I have a pull
 request which adds a few unittests to algorithm, and it is
 consistently crashing on win32 with an out of memory error.
 
 In layman's terms: std\algorithm.d is full. You literally can't
 add any more unittests to it, without crashing dmd on win32.
 
 I'd have recommended splitting the unittests in sub-modules or
 whatnot, but come to think about it, I'm actually more concern
 that a module could *singlehandedly* make the compiler crash on
 out of memory...
 
 Also, I'm no expert, but why is my dmd limited to 1 GB memory on
 my 64 bit machine...?
Don just made some changes to Tuple so that it no longer uses std.metastrings.Format (which is _insanely_ inefficient), which may fix the memory problem that you were running into. Obviously, dmd still has issues, but a major one in the library was fixed. (We should really look at killing std.metastrings entirely: it's too inefficient to reasonably use, and the new format should now work with CTFE, unlike the old one, whose lack of CTFE support is why Format exists in the first place.) A generic illustration of the difference is below.

- Jonathan M Davis
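This is my own toy example, not the real std.metastrings code, but it shows why the template route costs so much more at compile time than plain CTFE:

//----
// A toy template-recursive formatter in the std.metastrings style (my own
// code, not the real module): every distinct argument instantiates a fresh
// chain of templates, and dmd keeps all of those instantiations in memory.
template DecimalString(ulong n)
{
    static if (n < 10)
        enum DecimalString = "" ~ cast(char)('0' + n);
    else
        enum DecimalString = DecimalString!(n / 10) ~ cast(char)('0' + n % 10);
}

// The CTFE route just runs an ordinary function at compile time instead,
// with no pile of template instantiations to hold on to.
string decimalString(ulong n)
{
    return n < 10 ? "" ~ cast(char)('0' + n)
                  : decimalString(n / 10) ~ cast(char)('0' + n % 10);
}

static assert(DecimalString!123 == "123");
static assert(decimalString(123) == "123");
//----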
Dec 11 2012
Walter Bright <newshound2 digitalmars.com> writes:
On 12/7/2012 6:23 AM, monarch_dodra wrote:
 Also, I'm no expert, but why is my dmd limited to 1 GB memory on my 64 bit
 machine...?
The latest beta I uploaded increases the limit to 2 GB (thanks to a patch by Rainer Schuetze).
Dec 11 2012
parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Tuesday, 11 December 2012 at 22:11:55 UTC, Walter Bright wrote:
 On 12/7/2012 6:23 AM, monarch_dodra wrote:
 Also, I'm no expert, but why is my dmd limited to 1 GB memory 
 on my 64 bit
 machine...?
The latest beta I uploaded increases the limit to 2 GB (thanks to a patch by Rainer Schuetze).
Is this patch in the main github release, or is there something special to change in the DMD makefile? I'm still having trouble, and am now having to deactivate some of algorithm's unittests to compile it, even without any changes. Any idea what I'm doing wrong?
Jan 14 2013
Walter Bright <newshound2 digitalmars.com> writes:
On 1/14/2013 1:35 PM, monarch_dodra wrote:
 On Tuesday, 11 December 2012 at 22:11:55 UTC, Walter Bright wrote:
 On 12/7/2012 6:23 AM, monarch_dodra wrote:
 Also, I'm no expert, but why is my dmd limited to 1 GB memory on my 64 bit
 machine...?
The latest beta I uploaded increases the limit to 2 GB (thanks to a patch by Rainer Schuetze).
Is this patch in the main github release, or is there something special to change in the DMD makefile? I'm still having trouble, and am now having to deactivate some of algorithm's unittest to compile it, even without any changes. Any idea what I'm doing wrong?
Durn, I don't remember what the patch was.
Jan 14 2013
Rainer Schuetze <r.sagitario gmx.de> writes:
On 14.01.2013 23:35, Walter Bright wrote:
 On 1/14/2013 1:35 PM, monarch_dodra wrote:
 On Tuesday, 11 December 2012 at 22:11:55 UTC, Walter Bright wrote:
 On 12/7/2012 6:23 AM, monarch_dodra wrote:
 Also, I'm no expert, but why is my dmd limited to 1 GB memory on my
 64 bit
 machine...?
The latest beta I uploaded increases the limit to 2 GB (thanks to a patch by Rainer Schuetze).
Is this patch in the main github release, or is there something special to change in the DMD makefile? I'm still having trouble, and am now having to deactivate some of algorithm's unittest to compile it, even without any changes. Any idea what I'm doing wrong?
Durn, I don't remember what the patch was.
The patch was in the heap allocation of dmc's runtime library, snn.lib. The new lib seems to be in dmd 2.061, but it's probably not picked up when linking with dmc, only when building D files. It has to be copied to dmc's lib path as well to have an effect on dmd itself.
Jan 14 2013
Walter Bright <newshound2 digitalmars.com> writes:
On 1/14/2013 2:57 PM, Rainer Schuetze wrote:
 The patch was in the heap allocation of dmc's runtime library snn.lib. The new
 lib seems to be in dmd 2.061, but it's probably not picked up when linking with
 dmc, only when building D files. It has to be copied to dmc's lib path aswell
to
 have an effect on dmd itself.
Ah, that's right. Need to make sure you're using \dmd\windows\lib\snn.lib.
Jan 14 2013
parent "monarch_dodra" <monarchdodra gmail.com> writes:
On Tuesday, 15 January 2013 at 00:17:26 UTC, Walter Bright wrote:
 On 1/14/2013 2:57 PM, Rainer Schuetze wrote:
 The patch was in the heap allocation of dmc's runtime library 
 snn.lib. The new
 lib seems to be in dmd 2.061, but it's probably not picked up 
 when linking with
 dmc, only when building D files. It has to be copied to dmc's 
 lib path aswell to
 have an effect on dmd itself.
Ah, that's right. Need to make sure you're using \dmd\windows\lib\snn.lib.
Ah. I was still running with the libs supplied in 2.060. That explains it then. Thanks!
Jan 14 2013