
digitalmars.D.learn - real.sizeof

reply Dom DiSc <dominikus scherkl.de> writes:
Why is real.sizeof == 16 on x86 systems?!?
It's the IEEE 754 extended format: 64-bit mantissa + 15-bit exponent 
+ sign.
It should be 10 bytes!
I mean, the alignment may be different, but why waste so much 
memory even in arrays?
Feb 05
next sibling parent Paul Backus <snarwin gmail.com> writes:
On Monday, 5 February 2024 at 16:45:03 UTC, Dom DiSc wrote:
 Why is real.sizeof == 16 on x86 systems?!?
 It's the IEEE 754 extended format: 64-bit mantissa + 15-bit 
 exponent + sign.
 It should be 10 bytes!
 I mean, the alignment may be different, but why waste so much 
 memory even in arrays?
According to the language spec, `real` is the ["largest floating point size available"][1]. This means that on some systems, it's actually an IEEE 754 128-bit quadruple-precision float, not an x87 80-bit extended-precision float.

You can verify this by compiling the following test program:

    pragma(msg, "real is ", cast(int) real.sizeof*8, " bits");
    pragma(msg, "real has a ", real.mant_dig, "-bit mantissa");

On my laptop (Linux, x86_64), compiling this program with `dmd -c` prints

    real is 128 bits
    real has a 64-bit mantissa

[1]: https://dlang.org/spec/type.html#basic-data-types
Feb 05
prev sibling parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On Monday, 5 February 2024 at 16:45:03 UTC, Dom DiSc wrote:
 Why is real.sizeof == 16 on x86 systems?!?
 It's the IEEE 754 extended format: 64-bit mantissa + 15-bit 
 exponent + sign.
 It should be 10 bytes!
 I mean, the alignment may be different, but why waste so much 
 memory even in arrays?
Padding.

The x86 ABI prefers things to be aligned, so on 32-bit x86 real.sizeof is 12 bytes, and on x86_64 it's 16 bytes. In both cases you don't get any extra precision over the 80 bits that the x87 gives you.
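A quick way to see this on a given target (a compile-time sketch; the printed values depend on the platform and compiler, e.g. 16, 16, and 64 on a typical x86_64 Linux system):

```d
// Prints at compile time; no main() needed, works with `dmd -c`.
pragma(msg, "real.sizeof   = ", real.sizeof);   // storage size, padding included
pragma(msg, "real.alignof  = ", real.alignof);  // ABI alignment
pragma(msg, "real.mant_dig = ", real.mant_dig); // actual precision: 64 for x87
```

Note that real.sizeof counts the padding bytes, while real.mant_dig reflects the precision you actually get.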
Feb 05
parent Dom DiSc <dominikus scherkl.de> writes:
On Monday, 5 February 2024 at 17:28:38 UTC, Iain Buclaw wrote:
 Padding.

 x86 ABI prefers things to be aligned, so on x86 it's 12 bytes, 
 x86_64 16 bytes.  In both cases you don't get any extra 
 precision over the 80-bits that x87 gives you.
This is exactly what I mean. The ABI may pad it, but sizeof should still give the number of bytes that are really used (not counting the gaps). Or is there a way to change the alignment of basic types?

In my code I wanted to decide whether the processor uses double-extended or quadruple as real, depending on sizeof. But now I've learned I cannot rely on this. Fortunately there is mant_dig, which gives the correct info.

At least in an array of real I would expect no padding, like in an array of bool (except for odd lengths).
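A sketch of the mant_dig-based detection mentioned above (the constants 64 and 113 are the mantissa widths of x87 double-extended and IEEE 754 binary128 respectively; 53 would mean real is just a double on that target):

```d
import std.stdio;

void main()
{
    // real.mant_dig identifies the underlying format reliably,
    // whereas real.sizeof only reports the padded storage size.
    static if (real.mant_dig == 64)
        writeln("real is x87 80-bit double-extended");
    else static if (real.mant_dig == 113)
        writeln("real is IEEE 754 binary128 (quadruple)");
    else static if (real.mant_dig == 53)
        writeln("real is IEEE 754 binary64 (double)");
    else
        writeln("real has a ", real.mant_dig, "-bit mantissa");
}
```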
Feb 05