
digitalmars.D - Memory allocation problem

reply bearophile <bearophileHUGS lycos.com> writes:
In a small program on Windows XP I have to allocate a large chunk of memory,
about 1847 MB. This PC has 2 GB of RAM. So I use std.c.stdlib.malloc(), with DMD
v1.042 (or v2.031). But it's not able to allocate it, and produces at runtime:
Error: Access Violation

An equivalent program written in C and compiled with GCC, allocating the same
amount of memory using malloc, is able to run (with just a little hard disk
thrashing at the beginning).
Do you know why DMD doesn't allow me to allocate more RAM, and can this be
fixed?

Bye,
bearophile
Aug 08 2009
next sibling parent reply Jeremie Pelletier <jeremiep gmail.com> writes:
bearophile Wrote:

 In a small program on Windows XP I have to allocate a large chunk of memory,
about 1847 MB. This PC has 2 GB of RAM. So I use std.c.stdlib.malloc(), with
DMD v1.042 (or v2.031). But it's not able to allocate it, and produces at
runtime:
 Error: Access Violation
 
 An equivalent program written in C and compiled with GCC, allocating the same
amount of memory using malloc, is able to run (with just a little hard disk
thrashing at the beginning).
 Do you know why DMD doesn't allow me to allocate more RAM, and can this be
fixed?
 
 Bye,
 bearophile

I just did the test on my 6 GB RAM laptop, and it failed; the exception seems to come from the error handling crashing on OutOfMemory exceptions (I think). Your C program was probably compiled as 64-bit, which has MUCH more room for virtual memory. DMD only produces 32-bit binaries so far, and the virtual memory a single 32-bit process can map is 4 GB, 2 of which are reserved for shared system memory.

What that means is: when you allocate memory, even if you DO have 2 GB available in RAM or the pagefile, you may not have enough contiguous free pages of virtual address space to map the allocation. 64-bit applications rarely hit that wall anymore.

You could always stream the data, allocate only the minimum required, and work on it one slice at a time.
Aug 08 2009
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Frank Benoit:

Is it the malloc that fails (returning null) or the handling of the block?<

The malloc, as I have written. But only when used by DMD.

--------------------

Jeremie Pelletier:
 Your C program was probably compiled as 64-bit, which has MUCH more room for
virtual memory.

I don't think so, I am running a 32-bit GCC on a 32-bit XP operating system. I think the bug is elsewhere (in DMD).

Bye,
bearophile
Aug 09 2009
parent reply Robert Fraser <fraserofthenight gmail.com> writes:
bearophile wrote:
 I don't think so, I am running a 32 bit GCC on a 32 bit XP operating system.
 I think the bug is elsewhere (in DMD).

Have you tried with DMC?
Aug 09 2009
parent reply bearophile <bearophileHUGS lycos.com> writes:
Frank Benoit:

Is it the malloc that fails (returning null) or the handling of the block?<

It's the filling of the memory block. malloc by itself doesn't crash.

---------------------

Robert Fraser:
 Have you tried with DMC?<

I have done a test with DMC too now.

// D code
import std.c.stdlib: malloc;

void main() {
    int i;
    int n = (1800 * 1000 * 1000) / double.sizeof;
    double* p = cast(double*)malloc(n * double.sizeof);
    for (i = 0; i < n; i++)
        p[i] = 1.0;
}

// C code
#include "stdlib.h"

int main() {
    int i;
    int n = (1800 * 1000 * 1000) / sizeof(double);
    double *p = (double*)malloc(n * sizeof(double));
    for (i = 0; i < n; i++)
        p[i] = 1.0;
    return 0;
}

With DMD that D code produces:
Error: Access Violation

That C code compiled with GCC (just gcc test.c -o test) works, and just has to move some memory to disk. That C code compiled with DMC works at first and then crashes badly at runtime after a moment.

Bye,
bearophile
Aug 09 2009
parent reply grauzone <none example.net> writes:
bearophile wrote:
 Frank Benoit:
 
 Is it the malloc that fails (returning null) or the handling of the block?<

It's the filling of the memory block. malloc by itself doesn't crash.

Then what is there to complain about? You know you must check return values. The D allocation probably fails due to memory fragmentation (just a guess).
Aug 09 2009
parent reply bearophile <bearophileHUGS lycos.com> writes:
grauzone:
Then what is there to complain about?<

I have paid for 2 GB RAM, so I am allowed to desire an 1800 MB array :-)
You know you must check return values.<

In real programs I check the return value of malloc, of course.
The D allocation probably fails due to memory fragmentation (just a guess).<

The D program always fails, the C program always runs. So then it's fragmentation that hurts the allocator of DMD only...?

Bye,
bearophile
Aug 09 2009
next sibling parent reply Robert Fraser <fraserofthenight gmail.com> writes:
bearophile wrote:
 grauzone:
 Then what is there to complain?<

I have paid for 2 GB RAM, so I am allowed to desire a 1800 MB array :-)

I agree it's a bug, and probably a rather major one. However, in a real use case, any program that needs 1800+ MB arrays should be 64-bit only.
Aug 10 2009
parent reply bearophile <bearophileHUGS lycos.com> writes:
Robert Fraser:
 I agree it's a bug, and probably a rather major one. However in a real 
 use case, any program that needs 1800+ MB arrays should be 64-bit only.

In that program there's essentially just that large array. What is the size of the biggest array you suggest using in a D/C program on a 32-bit OS running on a 2 GB RAM PC?

Bye,
bearophile
Aug 10 2009
parent Jeremie Pelletier <jeremiep gmail.com> writes:
bearophile Wrote:

 Robert Fraser:
 I agree it's a bug, and probably a rather major one. However in a real 
 use case, any program that needs 1800+ MB arrays should be 64-bit only.

In that program there's essentially just that large array. What is the size of the biggest array you suggest using in a D/C program on a 32-bit OS running on a 2 GB RAM PC? Bye, bearophile

I don't think there's any ideal value for a maximum allocation size; I can't even think of a use for such large arrays. There is always a way to split the allocation into smaller ones which will be easy to map in the available virtual memory space.

If it's a single stream, loading it all into memory at once is overkill; it would be more efficient to create a specialized range that loads something like 0x1000 bytes at a time (aligned to the same boundary) and operates on that slice.
Aug 10 2009
prev sibling parent bearophile <bearophileHUGS lycos.com> writes:
Steven Schveighoffer:
 My point is, don't count on having 2GB of usable space even if you  
 physically have 2GB of RAM, it may not be the case.

I was looking to use 1.8 GB, not 2.
Better off to not desire than to complain about edge conditions based on
hardware limitations.<

With C I can use up to about 1.94 GB of RAM; I don't think 1.8 is so on the edge :-) I think this is a bug in DMD, it's not a problem with my PC.

Bye,
bearophile
Aug 10 2009
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Sun, 09 Aug 2009 15:51:46 -0400, bearophile <bearophileHUGS lycos.com>  
wrote:

 grauzone:
 Then what is there to complain?<

I have paid for 2 GB RAM, so I am allowed to desire a 1800 MB array :-)

hehe. Not necessarily. There are often hardware/OS limitations. For example, many 32-bit Intel chipsets support 4GB of memory but only allow you to use 3GB, because 1GB of *address space* is used for PCI registers. What ends up happening is you waste 1GB of memory. However, it's advantageous to use 4GB instead of 3GB because of memory parity -- each channel should have the same amount of memory. I worked for a company that built such systems for some customers. It was a pain because we wanted to ensure all the memory was tested, but there was no way to physically test it...

In other cases, there may be video hardware that shares memory with your OS, leaving you less available space.

My point is, don't count on having 2GB of usable space even if you physically have 2GB of RAM; it may not be the case. Better off not to desire than to complain about edge conditions based on hardware limitations. It's like complaining that your car doesn't work properly at 150mph even though the speedometer goes that high.

-Steve
Aug 10 2009
parent BCS <ao pathlink.com> writes:
Reply to Steven,

 [snip discussion of hardware/OS limits on usable memory]
 
 My point is, don't count on having 2GB of usable space even if you
 physically have 2GB of RAM, it may not be the case.

With virtual memory, the amount of actual RAM you have is only a speed consideration. You can allocate 1.5GB of RAM on a system with only 512 MB; it just ends up swapping when you go and use it. The only constraint is the address space and the OS.

And with things like memory-mapped files you can even treat the CPU address space as a window on the hard drive and allocate more memory than you have address space for.

This blog post and its link are an interesting read on this topic:
http://blogs.msdn.com/oldnewthing/archive/2009/07/06/9818299.aspx
Aug 10 2009
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 10 Aug 2009 14:15:41 -0400, bearophile <bearophileHUGS lycos.com>  
wrote:

 Steven Schveighoffer:
 My point is, don't count on having 2GB of usable space even if you
 physically have 2GB of RAM, it may not be the case.

 I was looking to use 1.8 GB, not 2.
 Better off to not desire than to complain about edge conditions based  
 on hardware limitations.<

With C I can use up to about 1.94 GB of RAM; I don't think 1.8 is so on the edge :-) I think this is a bug in DMD, it's not a problem with my PC.

But you are complaining about limitations to which many factors may contribute. One of them is DMD. I'm not saying there isn't a bug in DMD; I'm just saying that expecting to be able to utilize almost all of your RAM for your one application -- in a system which runs dozens of services along with a GUI and other stuff -- is expecting too much.

BTW, you are not getting 1.94GB of RAM allocated via C, just 1.94GB of virtual space. I know of no "normal" programs which try to allocate all available RAM and perform well. I think this is a non-issue, or at least not a good test case.

If you are running XP you should know that 256MB of RAM is the *minimum* requirement, which means the OS probably takes up about 75% of that space. Don't expect to run really fast on a system like that. You are reducing your system to that state when you try to use all the RAM on the system.

Now, if you allocated 1GB of RAM and noticed that your process was using 2GB of virtual space, I'd say that's a bug in DMD (I think someone reported something like that a while ago; not sure if it was fixed).

-Steve
Aug 10 2009
prev sibling next sibling parent language_fan <foo bar.com.invalid> writes:
Mon, 10 Aug 2009 14:15:41 -0400, bearophile thusly wrote:

 Steven Schveighoffer:
 My point is, don't count on having 2GB of usable space even if you
 physically have 2GB of RAM, it may not be the case.

I was looking to use 1.8 GB, not 2.
Better off to not desire than to complain about edge conditions based on
hardware limitations.<

With C I can use up to about 1.94 GB of RAM; I don't think 1.8 is so on the edge :-) I think this is a bug in DMD, it's not a problem with my PC.

You could also try buying another 2 GB of RAM and see if the problem persists. On a machine with 4 GB of physical RAM, a 32-bit OS can usually see ~3..3.5 GB of it. I even think that if your OS supports page files, it doesn't matter whether the memory is on chip or on disk -- if the memory is split 3/1 between processes and kernel, you should be able to allocate 2 GB easily.
Aug 11 2009
prev sibling parent language_fan <foo bar.com.invalid> writes:
Tue, 11 Aug 2009 12:58:59 +0000, language_fan thusly wrote:

 I even think that if your OS supports page
 files, it doesn't matter if the memory is on chip or disk - if the
 memory is split 3/1 for processes and kernel, you should be able to
 allocate 2 GB easily.

For example, my system has 5GB of virtual memory reported by the OS, with 3.5GB of free virtual space and 1.6GB of free physical RAM. The OS provides a memory space of 3GB for each process. Based on some testing, C allows allocating a maximum of 2810MB with malloc.
Aug 11 2009
prev sibling next sibling parent Frank Benoit <keinfarbton googlemail.com> writes:
bearophile schrieb:
 In a small program on Windows XP I have to allocate a large chunk of
 memory, about 1847 MB. This PC has 2 GB of RAM. So I use
 std.c.stdlib.malloc(), with DMD v1.042 (or v2.031). But it's not able
 to allocate it, and produces at runtime: Error: Access Violation
 
 An equivalent program written in C and compiled with GCC, allocating
 the same amount of memory using malloc, is able to run (with just a
 little hard disk thrashing at the beginning). Do you know why DMD
 doesn't allow me to allocate more RAM, and can this be fixed?
 
 Bye, bearophile

Is it the malloc that fails (returning null) or the handling of the block? D arrays afaik can only handle 16e6 elements.
Aug 09 2009
prev sibling parent reply div0 <div0 users.sourceforge.net> writes:

http://www.digitalmars.com/d/archives/digitalmars/D/OPTLINK_and_LARGEADDRESSAWARE_88061.html

bearophile wrote:
 In a small program on Windows XP I have to allocate a large chunk of memory,
about 1847 MB. This PC has 2 GB of RAM. So I use std.c.stdlib.malloc(), with
DMD v1.042 (or v2.031). But it's not able to allocate it, and produces at
runtime:
 Error: Access Violation
 
 An equivalent program written in C and compiled with GCC, allocating the same
amount of memory using malloc, is able to run (with just a little hard disk
thrashing at the beginning).
 Do you know why DMD doesn't allow me to allocate more RAM, and can this be
fixed?
 
 Bye,
 bearophile

--
My enormous talent is exceeded only by my outrageous laziness.
http://www.ssTk.co.uk
Aug 10 2009
parent bearophile <bearophileHUGS lycos.com> writes:
Jeremie Pelletier:

I can't even think of a use for such large arrays. There is always a way to
split the allocation into smaller ones which will be easy to map in the
available virtual memory space.<

You not being able to imagine a good use case doesn't imply there isn't one :-) This program allocates a very large tree in a linear data structure. The result is much higher performance compared to the usual tree allocated as sparse nodes, or even compared to structs taken from one or more freelists allocated as arrays of contiguous structs. The C code runs in seconds instead of minutes, even considering a bit of hard disk thrashing at the beginning. Modern CPUs love arrays much more than in the past; linked data structures are becoming obsolete.

-------------------

div0:
 http://www.digitalmars.com/d/archives/digitalmars/D/OPTLINK_and_LARGEADDRESSAWARE_88061.html

Thank you very much, I think the case is closed.

Bye,
bearophile
Aug 10 2009