
digitalmars.D.learn - Beginner memory question.

reply WhatMeWorry <kheaser gmail.com> writes:
I'm playing around with dynamic arrays and I wrote the tiny 
program (at bottom). I get the following output:

PS C:\D\sandbox> dmd -m64 maxMem.d
PS C:\D\sandbox> .\maxMem.exe
Reserving 1,610,613,245 elements
reserve() returned a size of: 1,610,613,245
The capacity() of big is 1,610,613,245
ulong.sizeof (num bytes) = 8
Total bytes allocated = 12,884,905,960
Total megabytes allocated = 12,288
Total gigabytes allocated = 12


The discrepancy is that my Windows 10 computer has only 8.0 GB 
of memory (and that is not even taking the OS's own usage into 
consideration).  Are my megabyte and gigabyte calculations wrong? 
Is virtual memory entering into the equation?


import std.stdio, std.array, std.algorithm;
import std.format;
import core.exception;

void main()
{
    ulong[] big;

    // reserve returns the new capacity of the array
    ulong e = 1_610_613_245;   // 1_610_613_246 returns an out-of-memory error
    writeln("Reserving ", format("%,3d", e), " elements");
    auto u = big.reserve(e);

    writeln("reserve() returned a size of: ", format("%,3d", u));

    writeln("The capacity() of big is ", format("%,3d", big.capacity));
    writeln("ulong.sizeof (num bytes) = ", ulong.sizeof);
    writeln("Total bytes allocated = ", format("%,3d", e * ulong.sizeof));

    immutable ulong megabyte = 1024 * 1024;          // 1,048,576
    immutable ulong gigabyte = 1024 * 1024 * 1024;   // 1,073,741,824

    writeln("Total megabytes allocated = ", format("%,3d", (e * ulong.sizeof) / megabyte));
    writeln("Total gigabytes allocated = ", format("%,3d", (e * ulong.sizeof) / gigabyte));
}
Apr 16 2022
parent reply Adam Ruppe <destructionator gmail.com> writes:
On Saturday, 16 April 2022 at 20:41:25 UTC, WhatMeWorry wrote:
 Is virtual memory entering into the equation?
Probably. Memory allocated doesn't physically exist until written to a lot of the time.
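A minimal D sketch of that point (the sizes here are made up for illustration, much smaller than the original program's): reserve() asks for address space, but the OS generally commits physical pages only as the array is actually written.

```d
import std.stdio, std.array;

void main()
{
    ulong[] big;

    // reserve() obtains address space from the GC; the OS typically
    // backs it with physical pages only when they are first written.
    big.reserve(10_000_000);
    assert(big.capacity >= 10_000_000);
    assert(big.length == 0);          // nothing has been written yet

    // Appending actually writes the memory, faulting pages in one by
    // one; only now does resident memory really grow.
    foreach (ulong i; 0 .. 1_000_000)
        big ~= i;

    writeln(big.length);   // prints 1000000
}
```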
Apr 16 2022
next sibling parent reply bauss <jj_1337 live.dk> writes:
On Saturday, 16 April 2022 at 20:48:15 UTC, Adam Ruppe wrote:
 On Saturday, 16 April 2022 at 20:41:25 UTC, WhatMeWorry wrote:
 Is virtual memory entering into the equation?
Probably. Memory allocated doesn't physically exist until written to a lot of the time.
You can also exceed your RAM in a lot of cases: the OS will just start using your disk as RAM (swap) instead, so just because you have 8 GB of RAM doesn't always mean you can only use 8 GB of RAM (in theory, of course).
Apr 19 2022
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Apr 19, 2022 at 12:54:06PM +0000, bauss via Digitalmars-d-learn wrote:
 On Saturday, 16 April 2022 at 20:48:15 UTC, Adam Ruppe wrote:
 On Saturday, 16 April 2022 at 20:41:25 UTC, WhatMeWorry wrote:
 Is virtual memory entering into the equation?
Probably. Memory allocated doesn't physically exist until written to a lot of the time.
You can also exceed your RAM in a lot of cases: the OS will just start using your disk as RAM (swap) instead, so just because you have 8 GB of RAM doesn't always mean you can only use 8 GB of RAM (in theory, of course).
In practice, having your program use more RAM than you physically have causes the OS to start thrashing on I/O as it scrambles to load/unload pages as your code accesses that (virtual) memory. Other programs get swapped out, everything slows down to a crawl, and your harddrive's lifetime decreases by a couple of months (or maybe a year or two).

So yes, in theory it's definitely possible (and in fact workable, as long as each individual program's entire working set can fit in RAM at the same time -- the OS can then swap out the other programs while that one program runs), but not something you want to push. Performance will slow down to an unusable crawl, and you may need a hard reboot if you don't want to spend the rest of the day waiting for the I/O thrashing to catch up with itself. Not worth it.

T

-- 
Chance favours the prepared mind. -- Louis Pasteur
Apr 19 2022
prev sibling parent reply Era Scarecrow <rtcvb32 yahoo.com> writes:
On Saturday, 16 April 2022 at 20:48:15 UTC, Adam Ruppe wrote:
 On Saturday, 16 April 2022 at 20:41:25 UTC, WhatMeWorry wrote:
 Is virtual memory entering into the equation?
Probably. Memory allocated doesn't physically exist until written to a lot of the time.
This might very much be an OS implementation issue. On Linux, using zram, I've allocated and made a compressed drive of 8 GB which took only 200k of space (*the data I needed to extract compresses very well and was only temporarily used*); claiming that much space even though I have only 4 GB of RAM didn't seem to matter. All unallocated pages are assumed null/zero-filled, and if you zeroize a block it will unallocate the space. That makes extracting memory-bomb archives (*terabytes of zeroized files meant to fill space*) rather safe in that environment.

I would think that if it's a small allocation (*say 32 MB or under, or some percentage like less than 1% of available memory*) the OS would allocate the memory and immediately return. If it's larger, it may claim it allocated a range of memory (*as long as RAM + VM could hold it*) and actually allocate it as needed. The CPU issues a page fault when you try to access unallocated memory, or memory that's not paged in at the time, and passes it to a handler; the handler then allocates the page(s) and resumes as though the memory had always been there (*alternatively it suspends the program until there is free RAM, saves the program to disk for later resumption if there are no open ports/writable files, or just kills the program with a segmentation fault*).

This makes some things faster and other things slower. If the OS tries to allocate all the memory at once, it may fill up RAM, swap pages out, then fill RAM up again until the whole request succeeds, which could be wasteful and slow. Or maybe it will reserve the necessary swap space and then allocate as much memory as it can before returning to the process.

When you run out of RAM and there's tons of swapping, a fast computer can turn into a brick for several minutes for the simplest of commands, at which point changing swap settings can improve things.
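The lazy, fault-driven allocation described above can be seen directly with an anonymous mmap. This is a Linux-only sketch using druntime's POSIX bindings; the 8 GiB figure is illustrative, and a kernel configured for strict overcommit may refuse the mapping:

```d
import core.sys.posix.sys.mman;
import std.stdio;

void main()
{
    // Ask the kernel for 8 GiB of anonymous virtual memory.
    size_t len = 8UL * 1024 * 1024 * 1024;
    void* p = mmap(null, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANON, -1, 0);
    if (p == MAP_FAILED)
    {
        writeln("mmap refused (strict overcommit?)");
        return;
    }

    // At this point resident memory has barely moved: every page is
    // notionally zero-filled and not yet backed by physical RAM.
    auto bytes = cast(ubyte*) p;
    bytes[0] = 1;         // first write page-faults; the kernel commits
    bytes[len - 1] = 1;   // one page here and one page at the far end

    writeln("mapped 8 GiB of address space, touched two pages");
    munmap(p, len);
}
```

Extracting a zero-filled "memory bomb" onto a zram device behaves much like the untouched middle of this mapping: pages that are never written (or are written as zeroes and reclaimed) cost almost nothing.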
Apr 19 2022
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Apr 19, 2022 at 05:01:15PM +0000, Era Scarecrow via Digitalmars-d-learn wrote:
[...]
 In linux using zram i've allocated and made a compressed drive of 8Gb
 which took only 200k of space [...] All unallocated pages are assumed
 null/zero filled, and if you zeroize a block it will unallocate the
 space. Makes extracting memory bomb archives (*Terabytes of zeroized
 files to fill space*) becomes rather safe in that environment.
[...] Don't be too confident about the safety of extracting memory-bomb archives. All the attacker has to do is make an archive of gigantic files containing 1's instead... The repeated bytes will compress at a very high ratio (likely the same ratio as zeroes), but when extracting, the kernel will not be able to optimize away pages filled with 1's.

T

-- 
Skill without imagination is craftsmanship and gives us many useful objects such as wickerwork picnic baskets. Imagination without skill gives us modern art. -- Tom Stoppard
Apr 19 2022