
digitalmars.D - Remembering more than 4GB

Stewart Gordon <smjg_1998 yahoo.com> writes:
In the old days of 16-bit, there were near and far pointers, presumably 
to give the programmer the choice of squashing a program's data into 64K 
or being able to use more.

Now, we have the flat memory space of 32-bit.  However, its limit is 
being approached.  At least, three years ago (looking through my old 
archives) the occasional Win98 box was getting upgraded to 1GB of 
RAM.  So if Moore's law is anything to go by, then systems with 4GB or 
more should be coming out now.

This raises a few questions.  How will Win32 (and 32-bit OSs in general) 
work on systems with more than 4GB of RAM?  Presumably either:
(a) the system will not work at all
(b) only the first 4GB will be visible
(c) there will be some obscure workaround

Of course, even when we have 64-bit successors to the 32-bit OSs, 
they'll still need to be backward compatible with 32-bit apps.  Maybe 
these will only be able to access the first 4GB of memory, but who knows?

Now for the big one ... is D going to be ready for whatever's going to 
happen?

Stewart.
Nov 22 2004
Anders F Björklund <afb algonet.se> writes:
Stewart Gordon wrote:

 Now for the big one ...
 is D going to be ready for whatever's going to happen?
Yes, see http://www.digitalmars.com/d/portability.html:
 32 to 64 Bit Portability
 64 bit processors and operating systems are coming. With that in mind:
 
 * Integral types will remain the same sizes between 32 and 64 bit code.
 * Pointers and object references will increase in size from 4 bytes to 8 bytes
   going from 32 to 64 bit code.
 * Use size_t as an alias for an unsigned integral type that can span the
   address space.
 * Use ptrdiff_t as an alias for a signed integral type that can span the
   address space.
 * The .length, .size, .sizeof, and .alignof properties will be of type size_t.
Of course, Phobos etc. also needs to be ported to 64-bit.

--anders
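To make the quoted guidelines concrete, here is a minimal sketch in present-day D (not the 2004 compiler) of size-related code that stays correct across 32- and 64-bit targets; the variable names are just for illustration:

// Written so it compiles and behaves the same on 32-bit and 64-bit
// targets, following the guidelines quoted above.
import std.stdio;

void main()
{
    int[] a = new int[10];

    size_t len = a.length;          // unsigned type that spans the address space
    ptrdiff_t diff = &a[9] - &a[0]; // signed pointer difference

    int* p = a.ptr;
    writefln("size_t: %s bytes, pointer: %s bytes, int: %s bytes",
             size_t.sizeof, p.sizeof, int.sizeof);
    writefln("len = %s, diff = %s", len, diff);
}

On a 32-bit build this reports 4-byte pointers and size_t; on a 64-bit build both grow to 8 bytes while int stays at 4, which is exactly the situation the guidelines are written for.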
Nov 22 2004
Ilya Minkov <minkov cs.tum.edu> writes:
Stewart Gordon wrote:
 This raises a few questions.  How will Win32 (and 32-bit OSs in general) 
 work on systems with more than 4GB of RAM?  Presumably either:
 (a) the system will not work at all
 (b) only the first 4GB will be visible
 (c) there will be some obscure workaround
I heard of (c), just that it is not very obscure. Namely, the OS can manage more memory, but each application must be mapped within a 4GB address space. Of that space, the upper gigabyte or two is mapped to OS functions, and the rest is available to the application.

I might be mixing something up, and this might be about 64-bit OSes providing compatibility for 32-bit applications.
 Of course, even when we have 64-bit successors to the 32-bit OSs, 
 they'll still need to be backward compatible with 32-bit apps.  Maybe 
 these will only be able to access the first 4GB of memory, but who knows?
Definitely not only the first, but any - just not more than 2 or 3 gigabytes per process. You know, applications don't access memory by its real addresses; they implicitly use a mapping that the OS sets up through special CPU facilities (the MMU and its page tables). Ever wondered why NULL pointers provoke a segfault? Because the first page of the application's address space is not mapped to valid memory. :)
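A tiny sketch of that point, in present-day D and purely for illustration: the addresses a program sees are virtual, handed out by the OS's mapping, and page zero is deliberately left out of that mapping, which is what turns a null dereference into a fault instead of silent corruption.

import std.stdio;

void main()
{
    int[] a = new int[1];

    // A virtual address chosen by the OS's mapping, not a physical RAM location.
    writefln("heap allocation at virtual address %x", cast(size_t) a.ptr);

    int* p = null;
    // *p = 42;  // would fault: the first page of the address space is
                 // intentionally unmapped, so the CPU traps the access
}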
 Now for the big one ... is D going to be ready for whatever's going to 
 happen?
As for 32-bit subsystems - yes, already. As for 64-bit ones - yes, it will be soon. And it has already happened - that sort of hardware is already in use for servers. There are the Athlon 64 and the Opteron, there are machines with large amounts of memory, and finally there is Linux for all of that, plus a Windows XP AMD64 edition - btw, that version of Windows is available for free for now. I heard both OSes are capable of executing legacy 32-bit code.

-eye
Nov 22 2004
Anders F Björklund <afb algonet.se> writes:
Ilya Minkov wrote:

 And it has already happened - that sort of hardware is used for servers. 
 There is Athlon64 and Opteron. There are large memory units. Finally, 
 there is Linux for all that, and Windows XP AMD64 edition - btw, this 
 version of Windows is available for free for now. I heard both OSes are 
 capable of executing legacy code.
Apple has the "G5" processor, and Mac OS X 10.4 "Tiger" is a 64-bit OS. You can get PPC64 support in both Linux and the Tiger Preview right now. The 32-bit PowerPC is a subset of the full 64-bit PowerPC architecture, which means that even 32-bit PPC applications can use 64-bit features...

--anders
Nov 22 2004
Maik Zumstrull <Maik.Zumstrull gmx.de> writes:
Stewart Gordon wrote:

 This raises a few questions.  How will Win32 (and 32-bit OSs in
 general)
 work on systems with more than 4GB of RAM?  Presumably either:
 (a) the system will not work at all
 (b) only the first 4GB will be visible
 (c) there will be some obscure workaround
Since the days of the Pentium Pro, Intel 32-bit processors have had an address space extension (PAE) that increases the physical address space to 36 bits while keeping the virtual address space at 32 bits. It works by mapping parts of each 4 GB virtual address space onto the 36-bit physical address space. OSes can make use of these extensions today; however, Win32 and Linux take different approaches:

-- In Win32, the kernel doesn't give a shit about the extended address space. Applications can be run with the special right to remap the virtual address space (a right not even Administrator accounts and SYSTEM have by default, since it's a giant security hole), and such an application has to manage its memory on its own. You can, of course, access the remapping API from D via C calls. However, I don't think you'd want to. It's just a very ugly solution.

-- In Linux, memory remapping is handled exclusively by the kernel. Every process gets its 4 GB share. If your application needs more memory, you need multiple processes, each managing up to 4 GB of memory. The kernel will take care of mapping the addresses around in the 36-bit physical address space.

Note: There has been some discussion on LKML about handling HIGHMEM differently in 2.7/2.8, by not directly mapping the high memory space into applications' address space, but by using the high memory area as a (very, very fast) swap device. That would not make a difference from a userspace/API point of view, though; it's just an internal change.
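For what it's worth, here is a rough sketch of what "access the remapping API from D via C calls" could look like on Win32, using the documented AWE functions (AllocateUserPhysicalPages, MapUserPhysicalPages, VirtualAlloc with MEM_PHYSICAL). It is written against today's D and its bundled Win32 bindings rather than the 2004 toolchain, the AWE prototypes are declared by hand in case the bindings lack them, and error handling plus the SeLockMemoryPrivilege setup are left out - an illustration of the ugliness, not production code:

version (Windows)
{
    import core.sys.windows.windows;   // basic Win32 types and functions

    // AWE entry points, declared by hand.
    extern (Windows) nothrow @nogc
    {
        BOOL AllocateUserPhysicalPages(HANDLE, ULONG_PTR*, ULONG_PTR*);
        BOOL MapUserPhysicalPages(void*, ULONG_PTR, ULONG_PTR*);
        BOOL FreeUserPhysicalPages(HANDLE, ULONG_PTR*, ULONG_PTR*);
    }

    enum MEM_PHYSICAL = 0x400000;   // VirtualAlloc flag for AWE windows

    void aweSketch()
    {
        SYSTEM_INFO si;
        GetSystemInfo(&si);

        // 1. Allocate physical pages (needs the special right mentioned
        //    above, SeLockMemoryPrivilege; no error checking here).
        ULONG_PTR nPages = 256;              // 1 MB worth of 4 KB pages
        auto frames = new ULONG_PTR[nPages]; // page frame numbers, filled in by the call
        AllocateUserPhysicalPages(GetCurrentProcess(), &nPages, frames.ptr);

        // 2. Reserve a window of virtual address space to view them through.
        void* window = VirtualAlloc(null, nPages * si.dwPageSize,
                                    MEM_RESERVE | MEM_PHYSICAL, PAGE_READWRITE);

        // 3. Map the physical pages into the window; remapping different
        //    page sets into the same window is how the application
        //    "manages its memory on its own".
        MapUserPhysicalPages(window, nPages, frames.ptr);

        // ... use the memory through `window`, remap as needed ...

        MapUserPhysicalPages(window, nPages, null);   // unmap the window
        FreeUserPhysicalPages(GetCurrentProcess(), &nPages, frames.ptr);
    }
}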
Nov 22 2004
larrycowan <larrycowan_member pathlink.com> writes:
Stewart Gordon wrote:

 This raises a few questions.  How will Win32 (and 32-bit OSs in
 general)
 work on systems with more than 4GB of RAM?  Presumably either:
 (a) the system will not work at all
 (b) only the first 4GB will be visible
 (c) there will be some obscure workaround
.. seems to be a handled, or at least handleable, issue - mostly by the OS, unless you actually want to use more than 4GB in a single process.

The more likely real problems come up when you start trying to have your programs take advantage of multiple processor cores on a single chip, which will likely be the major future route for growth in computing power. Tightly coupled multi-processor programming just hasn't been worked out very much beyond the obvious things like vector and matrix operations (see the sketch below). It will probably require many new concepts, and upgrading current languages will not be a satisfactory solution. Most likely, though, the experimentation needed to get there will take place as library redevelopment and OS extensions, and D might be well situated to facilitate a lot of that.

There is at least one 4-processor-on-a-chip development going on right now, and 2 cores is a done deal. Unless much changes in the way we program, 8 or 16 is probably a practical limit, with non-symmetric processors likely: some general purpose, some planned for specific OS tasks, and some, like floating point registers, designed for special application usages. The individual processors may well be slower than current ones...

After that, the whole way OSes, languages, programs, communications, etc. work will have to change in a big way. How would you use a 512-processor chip to solve current problems? Beyond trying to neural-net reality, what?
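As a sketch of the "obvious" data-parallel case mentioned above: in present-day D (std.parallelism did not exist when this was posted), an element-wise vector operation can be spread across cores in a few lines - everything beyond that, tightly coupled work with real data dependencies, is where the open problems sit:

import std.parallelism : parallel;
import std.range : iota;
import std.stdio : writeln;

void main()
{
    auto a = new double[1_000_000];
    auto b = new double[a.length];

    // The runtime splits the index range into chunks and hands each chunk
    // to a worker thread from the default task pool.
    foreach (i; parallel(iota(a.length)))
    {
        a[i] = i * 0.5;
        b[i] = a[i] * a[i];   // element-wise square: embarrassingly parallel
    }

    writeln("b[42] = ", b[42]);
}

This scales roughly with the number of cores precisely because no iteration depends on any other; the interesting (and unsolved, in the sense above) part is everything that does not have that shape.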
Nov 23 2004