
digitalmars.D.learn - Handling big FP numbers

reply Murilo <murilomiranda92 hotmail.com> writes:
Why is it that in C, when I assign the number 99999912343000007654329925.7865 to a double, it prints 99999912342999999470108672.0000, but in D it prints 99999912342999999000000000.0000? Apparently both languages cause a certain loss of precision (which is part of converting from decimal to binary), but why are the results different?
Feb 08 2019
next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Saturday, 9 February 2019 at 02:12:29 UTC, Murilo wrote:
 [...]
Two likely reasons: the D compiler does compile-time stuff at 80 bits, whereas the C++ one probably uses 64 bits, and D's default print rounds more aggressively than C++'s default printing. It is useful to put stuff in runtime variables of type `double` to avoid the different-bit-width issue, and to print with printf in both languages to ensure that side is the same.
Feb 08 2019
parent reply Murilo <murilomiranda92 hotmail.com> writes:
On Saturday, 9 February 2019 at 02:42:09 UTC, Adam D. Ruppe wrote:
 On Saturday, 9 February 2019 at 02:12:29 UTC, Murilo wrote:
 [...]
Two likely reasons: the D compiler does compile-time stuff at 80 bits, whereas the C++ one probably uses 64 bits, and D's default print rounds more aggressively than C++'s default printing. It is useful to put stuff in runtime variables of type `double` to avoid the different-bit-width issue, and to print with printf in both languages to ensure that side is the same.
Hi, thanks for the information. But I used the type `double` in D, which is supposed to be only 64 bits long, not 80; the type `real` is the one that is supposed to be 80 bits long. Right?
Feb 08 2019
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Saturday, 9 February 2019 at 02:46:52 UTC, Murilo wrote:
 But I used the type `double` in D, which is supposed to be only
 64 bits long, not 80; the type `real` is the one that is supposed
 to be 80 bits long. Right?
Right, BUT the compile-time stuff is done before that.

double a = 1.0 * 2.0;

The 1.0 * 2.0 is done as real inside the compiler, then the *result* is assigned to the 64-bit double. In a C++ compiler, that would be done at 64 bits throughout. So the different intermediate rounding can give a different result.

(The `real` thing in D was a massive mistake. It is slow and adds nothing but confusion.)
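To see the effect, here is a minimal sketch (an illustration, not from the original post; it assumes an x86 DMD build where constant folding happens at real precision, so the two lines may print identical values on other setups):

```
import std.stdio;

void main()
{
    // Folded by the compiler: the literals and the multiply may be
    // handled at 80-bit real precision, with only the final result
    // narrowed to a 64-bit double.
    double folded = 0.1 * 0.1;

    // Runtime variables: each operand is rounded to 64 bits first and
    // the multiply is done at double precision, as C typically does.
    double x = 0.1;
    double y = x * x;

    writefln("%.20f", folded);
    writefln("%.20f", y);
}
```

Any difference between the two printed values shows up only in the last few digits, which is exactly the kind of discrepancy seen above.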
Feb 08 2019
next sibling parent Murilo <murilomiranda92 hotmail.com> writes:
On Saturday, 9 February 2019 at 02:54:18 UTC, Adam D. Ruppe wrote:
 On Saturday, 9 February 2019 at 02:46:52 UTC, Murilo wrote:
 But I used the type `double` in D, which is supposed to be only
 64 bits long, not 80; the type `real` is the one that is supposed
 to be 80 bits long. Right?
Right, BUT the compile-time stuff is done before that. double a = 1.0 * 2.0; The 1.0 * 2.0 is done as real inside the compiler, then the *result* is assigned to the 64-bit double. In a C++ compiler, that would be done at 64 bits throughout. So the different intermediate rounding can give a different result. (The `real` thing in D was a massive mistake. It is slow and adds nothing but confusion.)
Ahhh okay, thanks for clearing it up. Is there a way to bypass that?
Feb 08 2019
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Feb 09, 2019 at 02:54:18AM +0000, Adam D. Ruppe via Digitalmars-d-learn
wrote:
[...]
 (The `real` thing in D was a massive mistake. It is slow and adds
 nothing but confusion.)
Yeah, it is also the only variable-width built-in type in D, which makes it a wart in an otherwise elegant system of fixed-width types. And 80-bit extended precision is non-standard and non-conformant to IEEE 754; who other than Intel engineers can tell how it behaves in corner cases?

It also makes std.math a laughing stock of D, because the majority of math functions implicitly convert to real and cast the result back to the source type, a hidden performance cost even when the caller explicitly used float precisely to avoid the cost of computing at higher precision. And it prevents the optimizer from, e.g., taking advantage of SSE instructions for faster computation with float/double.

T

-- 
If lightning were to ever strike an orchestra, it'd always hit the conductor first.
Feb 08 2019
prev sibling parent Guillaume Piolat <first.last gmail.com> writes:
On Saturday, 9 February 2019 at 02:54:18 UTC, Adam D. Ruppe wrote:
 (The `real` thing in D was a massive mistake. It is slow and 
 adds nothing but confusion.)
We've had occasional problems with `real` being 80-bit on the FPU, giving more precision than asked for and effectively hiding 32-bit float precision problems until the code runs on SSE. Not a big deal, but I would argue that giving more precision than asked for is a form of Postel's law: a bad idea.
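A minimal sketch of that hiding effect (a hypothetical illustration, not from the original post; the result depends on whether intermediates are kept in 80-bit x87 registers or rounded to 32 bits in SSE):

```
import std.stdio;

void main()
{
    float a = 16_777_216.0f;    // 2^24: the next representable float is 2 away
    float r = (a + 1.0f) - a;

    // At strict 32-bit float precision the +1 is lost and r is 0;
    // with 80-bit x87 intermediates the sum is exact and r can be 1.
    writeln(r);
}
```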
Feb 12 2019
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Feb 09, 2019 at 02:12:29AM +0000, Murilo via Digitalmars-d-learn wrote:
 Why is it that in C, when I assign the number
 99999912343000007654329925.7865 to a double, it prints
 99999912342999999470108672.0000, but in D it prints
 99999912342999999000000000.0000? Apparently both languages cause a
 certain loss of precision (which is part of converting from decimal
 to binary), but why are the results different?
It's not only because of converting decimal to binary, it's also because double only has 64 bits to store information, and your number has far more digits than can possibly fit into 64 bits. Some of those bits are used up for storing the sign and exponent, so `double` can really only store approximately 15 decimal digits. Anything beyond that simply doesn't exist in a `double`, so attempting to print that many digits will inevitably produce garbage trailing digits. If you round the above outputs to about 15 digits, you'll see that they are essentially equal.

As to why different output is produced, that's probably just an artifact of the different printing algorithms used to output the number. There may also be a small difference around or after the 15th digit because D implicitly converts to `real` (which on x86 is the 80-bit proprietary FPU representation) for some operations and rounds the result back, while C only operates on 64-bit double.

Either way, it is not possible for `double` to hold more than about 15 decimal digits, and any output produced from the non-existent digits beyond them is suspect and probably just random garbage.

If you want to hold more than 15 digits, you'll either have to use `real`, which depending on your CPU will be 80-bit (x86) or 128-bit (a few newer, less common CPUs), or an arbitrary-precision library that simulates larger precision in software, like the MPFR module of libgmp. Note, however, that even 80-bit real realistically only holds up to about 18 digits, which isn't much more than a double, and still far too small for your number above. You need at least a 128-bit quadruple-precision type (which can represent up to about 34 digits) to represent your number accurately.

T

-- 
The fact that anyone still uses AOL shows that even the presence of options doesn't stop some people from picking the pessimal one. - Mike Ellis
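Those digit counts can be checked directly in D; a small sketch (output shown for a typical x86 machine where real is the 80-bit type):

```
import std.stdio;

void main()
{
    // .dig is the number of decimal digits each type can reliably hold.
    writeln("float:  ", float.dig);   // 6
    writeln("double: ", double.dig);  // 15
    writeln("real:   ", real.dig);    // 18 on x86; 15 where real == double
}
```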
Feb 08 2019
next sibling parent DanielG <simpletangent gmail.com> writes:
On Saturday, 9 February 2019 at 03:03:41 UTC, H. S. Teoh wrote:
 It's not only because of converting decimal to binary, it's 
 also because double only has 64 bits to store information, and 
 your number has far more digits than can possibly fit into 64 
 bits.
Adding to that, here's a nice little online calculator that will show the binary representation of a given decimal number: https://www.h-schmidt.net/FloatConverter/IEEE754.html
Feb 08 2019
prev sibling next sibling parent reply Murilo <murilomiranda92 hotmail.com> writes:
On Saturday, 9 February 2019 at 03:03:41 UTC, H. S. Teoh wrote:
 On Sat, Feb 09, 2019 at 02:12:29AM +0000, Murilo via 
 Digitalmars-d-learn wrote:
 Why is it that in C, when I assign the number
 99999912343000007654329925.7865 to a double, it prints
 99999912342999999470108672.0000, but in D it prints
 99999912342999999000000000.0000? Apparently both languages cause a
 certain loss of precision (which is part of converting from decimal
 to binary), but why are the results different?
It's not only because of converting decimal to binary, it's also because double only has 64 bits to store information, and your number has far more digits than can possibly fit into 64 bits. [...]
Now, changing the subject a little: all FPs in D turn out to be printed differently than they are in C, and in C the result comes out a little more precise than in D. Is this really supposed to happen?
Feb 08 2019
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Saturday, 9 February 2019 at 03:21:51 UTC, Murilo wrote:
 Now, changing the subject a little: all FPs in D turn out to be
 printed differently than they are in C, and in C the result comes
 out a little more precise than in D. Is this really supposed to
 happen?
Like I said in my first message, the D default rounds off more than the C default. This usually results in more readable output - the extra noise at the end is not that helpful in most cases. But you can change this with the format specifiers (use `writefln` instead of `writeln` and give a precision argument) or, of course, you can use the same C printf function from D.
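For instance, a small sketch of those two options:

```
import std.stdio;
import core.stdc.stdio : printf;

void main()
{
    double d = 0.1;
    writeln(d);             // default formatting: rounds off aggressively
    writefln("%.20f", d);   // explicit precision via a format specifier
    printf("%.20f\n", d);   // or go through C's printf directly
}
```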
Feb 08 2019
next sibling parent reply Murilo <murilomiranda92 hotmail.com> writes:
On Saturday, 9 February 2019 at 03:28:24 UTC, Adam D. Ruppe wrote:
 On Saturday, 9 February 2019 at 03:21:51 UTC, Murilo wrote:
 Now, changing the subject a little: all FPs in D turn out to be
 printed differently than they are in C, and in C the result comes
 out a little more precise than in D. Is this really supposed to
 happen?
Like I said in my first message, the D default rounds off more than the C default. This usually results in more readable output - the extra noise at the end is not that helpful in most cases. But you can change this with the format specifiers (use `writefln` instead of `writeln` and give a precision argument) or, of course, you can use the same C printf function from D.
Thanks, but here is the situation: I use printf("%.20f", 0.1); in both C and D. C prints 0.10000000000000000555 whereas D prints 0.10000000000000001000. So I understand your point, D rounds off more, but that causes a loss of precision; isn't that bad if you are working with math or physics, for example?
Feb 08 2019
parent reply DanielG <simpletangent gmail.com> writes:
On Saturday, 9 February 2019 at 03:33:13 UTC, Murilo wrote:
 Thanks, but here is the situation: I use printf("%.20f", 0.1);
 in both C and D. C prints 0.10000000000000000555 whereas D prints
 0.10000000000000001000. So I understand your point, D rounds off
 more, but that causes a loss of precision; isn't that bad if you
 are working with math or physics, for example?
0.1 in a 64-bit double is actually 0.1000000000000000055511151231257827021181583404541015625 behind the scenes (the shorter value 0.100000001490116119384765625 you may see in online converters is the 32-bit float version). So why is it important that it displays as 0.10000000000000000555 versus 0.10000000000000001000? Both are decimal roundings of those same underlying bits; the C output just carries the exact expansion a little further before cutting off, while the D output stops at the 17 significant digits needed to round-trip the value and pads with zeros. Since there's no exact way of representing 0.1 in floating point, the computer has no way of knowing you really mean "0.1 decimal". If the accuracy is that important to you, you'll probably have to look into software-only number representations for arbitrary decimal precision (I've not explored them in D, but other languages have things like "BigDecimal" data types)
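A way to see the exact stored value from D (an illustration assuming a libc, such as glibc, whose printf generates the full decimal expansion on request):

```
import core.stdc.stdio : printf;

void main()
{
    // Asking for all 55 fractional digits spells out the exact value
    // of the double closest to 0.1:
    // 0.1000000000000000055511151231257827021181583404541015625
    printf("%.55f\n", 0.1);
}
```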
Feb 08 2019
parent reply Murilo <murilomiranda92 hotmail.com> writes:
On Saturday, 9 February 2019 at 04:30:22 UTC, DanielG wrote:
 On Saturday, 9 February 2019 at 03:33:13 UTC, Murilo wrote:
 Thanks, but here is the situation: I use printf("%.20f", 0.1);
 in both C and D. C prints 0.10000000000000000555 whereas D prints
 0.10000000000000001000. So I understand your point, D rounds off
 more, but that causes a loss of precision; isn't that bad if you
 are working with math or physics, for example?
0.1 in a 64-bit double is actually 0.1000000000000000055511151231257827021181583404541015625 behind the scenes. [...] If the accuracy is that important to you, you'll probably have to look into software-only number representations for arbitrary decimal precision (I've not explored them in D, but other languages have things like "BigDecimal" data types)
Thank you very much for clearing this up. But how do I make D behave just like C? Is there a way to do that?
Feb 08 2019
parent reply DanielG <simpletangent gmail.com> writes:
On Saturday, 9 February 2019 at 04:32:44 UTC, Murilo wrote:
 Thank you very much for clearing this up. But how do I make D 
 behave just like C? Is there a way to do that?
Off the top of my head, you'd have to link against libc so you could use printf() directly. But may I ask why that is so important to you?
Feb 08 2019
next sibling parent Murilo <murilomiranda92 hotmail.com> writes:
On Saturday, 9 February 2019 at 04:36:26 UTC, DanielG wrote:
 On Saturday, 9 February 2019 at 04:32:44 UTC, Murilo wrote:
 Thank you very much for clearing this up. But how do I make D 
 behave just like C? Is there a way to do that?
Off the top of my head, you'd have to link against libc so you could use printf() directly. But may I ask why that is so important to you?
It is important because there are websites with problem sets where you submit your code and it automatically checks whether you got it right; in D it will always say "Wrong Answer" because the program outputs a different string than what is expected.
Feb 08 2019
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Feb 09, 2019 at 04:36:26AM +0000, DanielG via Digitalmars-d-learn wrote:
 On Saturday, 9 February 2019 at 04:32:44 UTC, Murilo wrote:
 Thank you very much for clearing this up. But how do I make D
 behave just like C? Is there a way to do that?
Off the top of my head, you'd have to link against libc so you could use printf() directly.
There's no need to do that, D programs already link to libc by default. All you need is to declare the right extern(C) prototype (or just import core.stdc.*) and you can call the function just like in C.

With the caveat, of course, that D strings are not the same as C's char*, so you have to use .ptr for string literals and toStringz for dynamic strings.

T

-- 
People tell me I'm stubborn, but I refuse to accept it!
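A sketch of what that looks like in practice:

```
import core.stdc.stdio : printf;
import std.string : toStringz;

void main()
{
    double d = 0.1;

    // String literals are zero-terminated; .ptr yields a C char*:
    printf("%.20f\n".ptr, d);

    // Dynamic strings need toStringz for a zero-terminated C string:
    string fmt = "%.20f\n";
    printf(fmt.toStringz, d);
}
```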
Feb 08 2019
prev sibling parent reply Murilo <murilomiranda92 hotmail.com> writes:
On Saturday, 9 February 2019 at 03:28:24 UTC, Adam D. Ruppe wrote:
 On Saturday, 9 February 2019 at 03:21:51 UTC, Murilo wrote:
 Now, changing the subject a little: all FPs in D turn out to be
 printed differently than they are in C, and in C the result comes
 out a little more precise than in D. Is this really supposed to
 happen?
Like I said in my first message, the D default rounds off more than the C default. This usually results in more readable output - the extra noise at the end is not that helpful in most cases. But you can change this with the format specifiers (use `writefln` instead of `writeln` and give a precision argument) or, of course, you can use the same C printf function from D.
Thanks, but which precision specifier should I use? I already tried "%.20f" but it still produces a result different from C's. Is there a way to make it identical to C?
Feb 08 2019
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Feb 09, 2019 at 03:52:38AM +0000, Murilo via Digitalmars-d-learn wrote:
 On Saturday, 9 February 2019 at 03:28:24 UTC, Adam D. Ruppe wrote:
[...]
 But you can change this with the format specifiers (use `writefln`
 instead of `writeln` and give a precision argument) or, of course,
 you can use the same C printf function from D.
Thanks, but which precision specifier should I use? I already tried "%.20f" but it still produces a result different from C's. Is there a way to make it identical to C?
I say again, you're asking for far more digits than are actually there. The extra digits are only an illusion of accuracy; they are essentially garbage values that have no real meaning. It really does not make any sense to print more digits than are actually there.

On the other hand, if your purpose is to export the double in a textual representation that guarantees reproducing exactly the same bits when imported later, you could use the %a format, which writes out the mantissa in exact hexadecimal form. It can then be parsed again later to reproduce the same bits as were in the original double.

Or if your goal is simply to replicate C behaviour, there's also the option of calling the C fprintf directly: just import core.stdc.stdio or declare the prototype yourself in an extern(C) declaration.

T

-- 
If it breaks, you get to keep both pieces. -- Software disclaimer notice
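For example, the %a round trip looks like this (a small sketch; the hex string shown is what 0.1 comes out as for an IEEE 754 double):

```
import std.stdio;
import core.stdc.stdlib : strtod;

void main()
{
    double d = 0.1;
    writefln("%a", d);   // 0x1.999999999999ap-4: the exact bits, in hex

    // C's strtod understands the hex form and reproduces the same bits:
    double restored = strtod("0x1.999999999999ap-4", null);
    assert(restored == d);
}
```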
Feb 08 2019
parent reply Murilo <murilomiranda92 hotmail.com> writes:
On Saturday, 9 February 2019 at 05:46:22 UTC, H. S. Teoh wrote:
 On Sat, Feb 09, 2019 at 03:52:38AM +0000, Murilo via 
 Digitalmars-d-learn wrote:
 On Saturday, 9 February 2019 at 03:28:24 UTC, Adam D. Ruppe 
 wrote:
[...]
 But you can change this with the format specifiers (use 
 `writefln` instead of `writeln` and give a precision 
 argument) or, of course, you can use the same C printf 
 function from D.
Thanks, but which precision specifier should I use? I already tried "%.20f" but it still produces a result different from C's. Is there a way to make it identical to C?
I say again, you're asking for far more digits than are actually there. The extra digits are only an illusion of accuracy; they are essentially garbage values that have no real meaning. [...] Or if your goal is simply to replicate C behaviour, there's also the option of calling the C fprintf directly: just import core.stdc.stdio or declare the prototype yourself in an extern(C) declaration.
Thanks, but even using core.stdc.stdio.fprintf() it still shows a different result on the screen. It seems this is a feature of D I will have to get used to and accept the fact that I can't always get the same number as in C, even though in the end it doesn't make a difference: as you explained, the final digits are just garbage anyway.
Feb 10 2019
parent reply Dennis <dkorpel gmail.com> writes:
On Sunday, 10 February 2019 at 20:25:02 UTC, Murilo wrote:
 It seems this is a feature of D I will have to get used to and
 accept the fact that I can't always get the same number as in C
What compilers and settings do you use? What you're actually comparing here are different implementations of the C runtime library.

```
#include <stdio.h>

int main()
{
    double a = 99999912343000007654329925.7865;
    printf("%f\n", a);
    return 0;
}
```

I compiled this with different C compilers on Windows 10 and found:

DMC:   99999912342999999472000000.000000
GCC:   99999912342999999000000000.000000
CLANG: 99999912342999999470108672.000000

As for D:

```
import core.stdc.stdio: printf;

int main()
{
    double a = 99999912343000007654329925.7865;
    printf("%f\n", a);
    return 0;
}
```

DMD: 99999912342999999472000000.000000
LDC: 99999912342999999470108672.000000

DMC and DMD both use the Digital Mars runtime by default. I think CLANG and LDC use the static libcmt by default, while GCC uses the dynamic msvcrt.dll (not sure about the exact one, but evidently it's different). So it really doesn't have anything to do with D vs. C, but rather with which C runtime you use.
Feb 10 2019
parent Murilo <murilomiranda92 hotmail.com> writes:
On Sunday, 10 February 2019 at 21:27:43 UTC, Dennis wrote:
 On Sunday, 10 February 2019 at 20:25:02 UTC, Murilo wrote:
 It seems this is a feature of D I will have to get used to and
 accept the fact that I can't always get the same number as in C
What compilers and settings do you use? What you're actually comparing here are different implementations of the C runtime library. [...] DMC and DMD both use the Digital Mars runtime by default. I think CLANG and LDC use the static libcmt by default, while GCC uses the dynamic msvcrt.dll (not sure about the exact one, but evidently it's different). So it really doesn't have anything to do with D vs. C, but rather with which C runtime you use.
Ahhh, alright, I was not aware of the fact that there are different runtimes, thanks for clearing this up.
Feb 10 2019
prev sibling parent reply Aurélien Plazzotta <cmoi gmail.com> writes:
On Saturday, 9 February 2019 at 03:03:41 UTC, H. S. Teoh wrote:

 If you want to hold more than 15 digits, you'll either have to 
 use `real`, which depending on your CPU will be 80-bit (x86) or 
 128-bit (a few newer, less common CPUs), or an 
 arbitrary-precision library that simulates larger precisions in 
 software, like the MPFR module of libgmp. Note, however, that 
 even 80-bit real realistically only holds up to about 18
 digits, which isn't much more than a double, and still far
 too small for your number above.  You need at least a 128-bit 
 quadruple precision type (which can represent up to about 34 
 digits) in order to represent your above number accurately.


 T
Thank you both for your lessons, Adam D. Ruppe and H. S. Teoh. Is there a wish, or has anyone shown an intention, to implement the hypothetical built-in 128-bit types into the language via the "cent" and "ucent" reserved keywords?
Feb 12 2019
parent Simen Kjærås <simen.kjaras gmail.com> writes:
On Tuesday, 12 February 2019 at 09:20:27 UTC, Aurélien Plazzotta 
wrote:
 Thank you both for your lesson Adam D. Ruppe and H.S. Teoh.
 Is there a wish or someone showing one's intention to implement 
 into the language the hypothetical built-in 128 bit types via 
 the "cent" and "ucent" reserved keywords?
cent and ucent would be integers, with values between -170141183460469231731687303715884105728 and 170141183460469231731687303715884105727, and between 0 and 340282366920938463463374607431768211455, respectively. Their implementation would not mean that any kind of 128-bit float would also be available.

For bigger floating-point numbers there are at least two packages on dub:

https://code.dlang.org/packages/stdxdecimal
https://code.dlang.org/packages/decimal

These are arbitrary-precision, and probably quite a bit slower than any built-in floats.

--
Simen
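For the integer side of that range, the standard library's std.bigint already covers it today; a quick sketch:

```
import std.bigint;
import std.stdio;

void main()
{
    // ucent's would-be maximum value, as an arbitrary-precision integer:
    BigInt max = BigInt("340282366920938463463374607431768211455");
    writeln(max + 1);   // 340282366920938463463374607431768211456, i.e. 2^128
}
```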
Feb 12 2019