
digitalmars.D.bugs - [Issue 15854] New: Intrinsic sin function uses buggy hardware fsin

https://issues.dlang.org/show_bug.cgi?id=15854

          Issue ID: 15854
           Summary: Intrinsic sin function uses buggy hardware fsin
                    instruction
           Product: D
           Version: D2
          Hardware: x86_64
                OS: All
            Status: NEW
          Severity: normal
          Priority: P1
         Component: dmd
          Assignee: nobody@puremagic.com
          Reporter: hsteoh@quickfur.ath.cx

It appears that on Intel platforms std.math.sin compiles down to essentially a
wrapper around the hardware fsin instruction, which is unfortunately quite
inaccurate for arguments near multiples of pi.

See:
https://randomascii.wordpress.com/2014/10/09/intel-underestimates-error-bounds-by-1-3-quintillion/

Using the code adapted from the above article:

-------
import std.math;
import std.stdio;

ulong representation(double d)
{
        union U { double ud; ulong ul; }
        auto u = U(d);
        return u.ul;
}

void main() {
        double pi_d = 3.14159265358979323846;
        writefln("...............0.12345678901234567890");
        writefln("double.dig = %d", double.dig);
        writefln("real.dig = %d", real.dig);
        writefln("          pi = %.33f", pi_d);
        writefln("             + %.33f", sin(pi_d));
        writefln(" std.math.PI = %.33f", PI);
        writefln("actual value = 3.141592653589793238462643383279502(8)...");
        writefln("sin(pi_d) = 0x%X", sin(pi_d).representation);
}
-------

(The actual value line is obtained from
http://www.math.com/tables/constants/pi.htm, which can be cross-checked with
other online sources for the digits of pi.)

Here's the output:
-------
...............0.12345678901234567890
double.dig = 15
real.dig = 18
          pi = 3.141592653589793115997963468544185
             + 0.000000000000000122460635382237726
 std.math.PI = 3.141592653589793238512808959406186
actual value = 3.141592653589793238462643383279502(8)...
sin(pi_d) = 0x3CA1A60000000000
-------

The first line of the output is basically a convenient ruler for easy reference
for digit position.

The equivalent C program, as a comparison:
--------
#include <stdio.h>
#include <math.h>
#include <stdint.h>
#include <inttypes.h>

/* uint64_t instead of unsigned long, which is only 32 bits on some
   64-bit platforms (e.g. Win64) */
union U { double ud; uint64_t ul; };

uint64_t representation(double d)
{
        union U u = { d };
        return u.ul;
}

int main() {
        double pi_d = 3.14159265358979323846;
        printf("...............0.1234567890123456789012345678901234567890\n");
        printf("          pi = %.33f\n", pi_d);
        printf("             + %.33f\n", sin(pi_d));
        printf("        M_PI = %.33f\n", M_PI);
        printf("actual value = 3.141592653589793238462643383279502(8)...\n");
        printf("sin(pi_d) = 0x%" PRIX64 "\n", representation(sin(pi_d)));
}
--------

The output (compiled with gcc 5.3.1, glibc 2.22) is:
--------
...............0.1234567890123456789012345678901234567890
          pi = 3.141592653589793115997963468544185
             + 0.000000000000000122464679914735321
        M_PI = 3.141592653589793115997963468544185
actual value = 3.141592653589793238462643383279502(8)...
sin(pi_d) = 0x3CA1A62633145C07
--------

Comparing the output of the C program with that of the D program, we see that
in the D program sin(pi_d) is truncated after 6 hex digits, just as the linked
article describes, as a result of the inaccuracy of the fsin instruction
(disassembly of the executable confirms that fsin is being used).  The
equivalent C code gives 0x3CA1A62633145C07 instead, which looks far more like
the value of a transcendental function.
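As a rough way to quantify the gap, one can subtract the two bit patterns
directly: both values have the same sign and exponent, so the integer
difference of their representations is their distance in units in the last
place (ulps). A small sketch using the two hex values printed above:

--------
#include <stdio.h>
#include <stdint.h>

int main(void) {
        /* Bit patterns taken verbatim from the two outputs above */
        uint64_t fsin_bits  = 0x3CA1A60000000000ULL; /* D / fsin result  */
        uint64_t glibc_bits = 0x3CA1A62633145C07ULL; /* glibc sin result */

        /* Same sign and exponent, so the integer difference of the
           representations is the error in ulps */
        uint64_t ulps = glibc_bits - fsin_bits;
        printf("difference = %llu ulps\n", (unsigned long long)ulps);
        return 0;
}
--------

That's roughly 1.6 x 10^11 ulps of error, consistent with the article's point
that Intel's documented 1-ulp error bound for fsin is off by many orders of
magnitude near pi.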

Manually adding the digits above (because this is beyond the precision of
double or even real) shows that the C output correctly adds up to 33 digits of
pi, whereas the D output adds up to only 19 correct digits (20 including the
first '3').
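The digits add up this way because for x near pi, sin(x) = sin(pi - x)
~= pi - x, so a correctly rounded sin(pi_d) essentially recovers the rounding
error of pi_d relative to the true pi. A quick check of that identity,
carrying a few extra digits of pi in a long double (this assumes an 80-bit
long double, as on x86 glibc):

--------
#include <stdio.h>
#include <math.h>

int main(void) {
        double pi_d = 3.14159265358979323846;  /* rounds to binary64 */
        /* pi to more digits than double can hold; an x87 80-bit long
           double keeps about 18-19 of them (platform assumption) */
        long double pi_ld = 3.14159265358979323846264338327950288L;

        printf("pi - pi_d = %.33Lf\n", pi_ld - (long double)pi_d);
        printf("sin(pi_d) = %.33f\n", sin(pi_d));
        return 0;
}
--------

On x86 glibc the two printed values agree to about 18 significant digits,
matching the "+" line in the C output above.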

As an aside, M_PI as defined by C's math.h gives 15 correct digits of pi
because it uses double precision, whereas std.math.PI gives 18 correct digits
because it uses real (80-bit) precision.
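For reference, C exposes the same information through <float.h>: DBL_DIG and
LDBL_DIG correspond to D's double.dig and real.dig (LDBL_DIG is 18 only where
long double is the x87 80-bit format; it differs on other ABIs):

--------
#include <stdio.h>
#include <float.h>

int main(void) {
        printf("DBL_DIG  = %d\n", DBL_DIG);   /* 15 for IEEE binary64 */
        printf("LDBL_DIG = %d\n", LDBL_DIG);  /* 18 with x87 80-bit;
                                                 platform-dependent */
        return 0;
}
--------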

Shouldn't we be using the C library version of sin() (especially since glibc
appears to have fixed the problem with a software implementation of sin)
instead of the faulty fsin instruction?

--
Mar 30 2016