
digitalmars.D - Size of the real type

kinghajj <kinghajj_member pathlink.com> writes:
This is just an FYI, but on my computer, this code:

import std.stdio;

int main(char[][] args)
{
writefln(real.sizeof * 8);
return 0;
}

Outputs the size of real as 96 bits, not 80.

I have an Intel Pentium 4 (Prescott) CPU.
Mar 08 2006
"Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
"kinghajj" <kinghajj_member pathlink.com> wrote in message 
news:duo1sh$1go1$1 digitaldaemon.com...
 This is just an FYI, but on my computer, this code:
 Outputs the size of real as 96 bits, not 80.
Odd! I get 80, as I'd expect. Are you using DMD or GDC?
Mar 08 2006
kinghajj <kinghajj_member pathlink.com> writes:
In article <duoad7$1psv$1 digitaldaemon.com>, Jarrett Billingsley says...
"kinghajj" <kinghajj_member pathlink.com> wrote in message 
news:duo1sh$1go1$1 digitaldaemon.com...
 This is just an FYI, but on my computer, this code:
 Outputs the size of real as 96 bits, not 80.
Odd! I get 80, as I'd expect. Are you using DMD or GDC?
DMD in Linux. I'll try running it in Windows to see if that makes a difference.
Mar 08 2006
parent "Unknown W. Brackets" <unknown simplemachines.org> writes:
It does.  As I recall, Walter has made past comments reflecting that the 
size of a real on Linux differs from the size of the same on Windows.  I 
believe this is for library reasons.

-[Unknown]


 In article <duoad7$1psv$1 digitaldaemon.com>, Jarrett Billingsley says...
 "kinghajj" <kinghajj_member pathlink.com> wrote in message 
 news:duo1sh$1go1$1 digitaldaemon.com...
 This is just an FYI, but on my computer, this code:
 Outputs the size of real as 96 bits, not 80.
Odd! I get 80, as I'd expect. Are you using DMD or GDC?
DMD in Linux. I'll try running it in Windows to see if that makes a difference.
Mar 08 2006
Anders F Björklund <afb algonet.se> writes:
kinghajj wrote:

 This is just an FYI, but on my computer, this code:
 
 import std.stdio;
 
 int main(char[][] args)
 {
 writefln(real.sizeof * 8);
 return 0;
 }
Side note:

Who said the size of a "real" is 80 bits? The size varies.
It's just defined as: "largest hardware implemented FP size"

I get 64, here on PowerPC :-) On a SPARC, you could get 128.
 Outputs the size of real as 96 bits, not 80.
 I have an Intel Pentium 4 (Prescott) CPU.
The difference is due to alignment of the long double type.

In x86 Linux, it is 96 bits. In x64 Linux, it is 128 bits...
But they both still only use 80 bits, just add some padding.

--anders

PS. http://gcc.gnu.org/onlinedocs/gcc-4.0.2/gcc/i386-and-x86_002d64-Options.html
"The i386 application binary interface specifies the size to be 96 bits,
so -m96bit-long-double is the default in 32 bit mode." [...] "In the
x86-64 compiler, -m128bit-long-double is the default choice as its ABI
specifies that long double is to be aligned on 16 byte boundary."
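For illustration, a small program along these lines (a sketch only, assuming DMD or GDC; the exact numbers depend on the platform ABI) shows the padded storage size next to the alignment and the significand width the hardware format actually uses:

import std.stdio;

int main(char[][] args)
{
    // Total storage size in bits, including any ABI padding.
    writefln("real.sizeof * 8 = ", real.sizeof * 8);

    // Alignment in bytes -- the reason the padding exists.
    writefln("real.alignof    = ", real.alignof);

    // Significand bits the hardware format actually uses (64 for x87 reals).
    writefln("real.mant_dig   = ", real.mant_dig);

    return 0;
}

On x86 Linux the first line would print 96 (80 on Windows), while real.mant_dig stays 64 either way; that difference is the padding described above.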
Mar 08 2006
parent reply "Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
"Anders F Björklund" <afb algonet.se> wrote in message 
news:duol0f$278j$1 digitaldaemon.com...
 Side note:
 Who said the size of a "real" is 80 bits ? The size varies.
 It's just defined as: "largest hardware implemented FP size"
Which should be 80 on x86 processors!
 The difference is due to alignment of the long double type.

 In x86 Linux, it is 96 bits. In x64 Linux, it is 128 bits...
 But they both still only use 80 bits, just add some padding.

 --anders

 PS.
 http://gcc.gnu.org/onlinedocs/gcc-4.0.2/gcc/i386-and-x86_002d64-Options.html
 "The i386 application binary interface specifies the size to be 96 bits, 
 so -m96bit-long-double is the default in 32 bit mode." [...] "In the 
 x86-64 compiler, -m128bit-long-double is the default choice as its ABI 
 specifies that long double is to be aligned on 16 byte boundary."
Well if the only difference is in the alignment, why isn't just the real.alignof field affected? An x86-32 real is 80 bits, period. Or does it have to do with, say, C function name mangling? So a C function that takes one real in Windows would be _Name 80 but in Linux it'd be _Name 96?
Mar 09 2006
parent reply "Walter Bright" <newshound digitalmars.com> writes:
"Jarrett Billingsley" <kb3ctd2 yahoo.com> wrote in message 
news:dupgi5$g9f$2 digitaldaemon.com...
 Well if the only difference is in the alignment, why isn't just the 
 real.alignof field affected?  An x86-32 real is 80 bits, period.  Or does 
 it have to do with, say, C function name mangling?  So a C function that 
 takes one real in Windows would be _Name 80 but in Linux it'd be _Name 96 
 ?
It's 96 bits on linux because gcc on linux pretends that 80 bit reals are
really 96 bits long. What the alignment is is something different again.
Name mangling does not drive this, although the "Windows" calling convention
will have different names as you point out, but that doesn't matter. 96 bit
convention permeates linux, and since D must be C ABI compatible with the
host system's default C compiler, 96 bits it is on linux.

If you're looking for mantissa significant bits, etc., use the various
.properties of float types.
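For example, a minimal sketch (assuming DMD; the printed values are platform-dependent) of those properties, which describe the floating point format itself rather than the padded storage size:

import std.stdio;

int main(char[][] args)
{
    writefln("storage size (bits): ", real.sizeof * 8);  // 80, 96 or 128 depending on ABI
    writefln("significand bits:    ", real.mant_dig);    // 64 on x87, regardless of padding
    writefln("decimal digits:      ", real.dig);
    writefln("min exponent:        ", real.min_exp);
    writefln("max exponent:        ", real.max_exp);
    writefln("machine epsilon:     ", real.epsilon);
    return 0;
}

On x86, real.mant_dig reports 64 whether real.sizeof works out to 80, 96 or 128 bits of storage.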
Mar 09 2006
Don Clugston <dac nospam.com.au> writes:
Walter Bright wrote:
 "Jarrett Billingsley" <kb3ctd2 yahoo.com> wrote in message 
 news:dupgi5$g9f$2 digitaldaemon.com...
 Well if the only difference is in the alignment, why isn't just the 
 real.alignof field affected?  An x86-32 real is 80 bits, period.  Or does 
 it have to do with, say, C function name mangling?  So a C function that 
 takes one real in Windows would be _Name 80 but in Linux it'd be _Name 96 
 ?
It's 96 bits on linux because gcc on linux pretends that 80 bit reals are
really 96 bits long. What the alignment is is something different again.
Name mangling does not drive this, although the "Windows" calling convention
will have different names as you point out, but that doesn't matter. 96 bit
convention permeates linux, and since D must be C ABI compatible with the
host system's default C compiler, 96 bits it is on linux.

If you're looking for mantissa significant bits, etc., use the various
.properties of float types.
The 128 bit convention makes some kind of sense -- it means an 80-bit real
is binary compatible with the proposed IEEE quad type (it just sets the
last few mantissa bits to zero). But the 96 bit case makes no sense to me
at all.

pragma's DDL lets you (to some extent) mix Linux and Windows .objs.
Eventually, we may need some way to deal with the different padding.
Mar 10 2006
parent "Walter Bright" <newshound digitalmars.com> writes:
"Don Clugston" <dac nospam.com.au> wrote in message 
news:durcq4$2u8o$1 digitaldaemon.com...
 But the 96 bit case makes no sense to me at all.
It doesn't matter if it makes much sense or not, we're stuck with it on linux.
 pragma's DDL lets you (to some extent) mix Linux and Windows .objs. 
 Eventually, we may need some way to deal with the different padding.
I think it's a pipe dream to expect to be able to mix obj files between operating systems. The 96 bit thing is far from the only difference.
Mar 10 2006