
digitalmars.D.learn - Why does this floating point comparison fail?

reply Jonathan M Davis <jmdavisProg gmx.com> writes:
Maybe I'm just totally missing something, but this seems really wrong to me. 
This program:

import std.stdio;

void main()
{
    writeln(2.1);
    writeln(2.1 == 2.1);
    writeln(3 * .7);
    writeln(2.1 == 3 * .7);

    auto a = 2.1;
    auto b = 3 * .7;
    writeln(a);
    writeln(b);
    writeln(a == b);
}



prints out

2.1
true
2.1
false
2.1
2.1
true


How on earth is 2.1 not equal to 3 * .7? Adding extra parens doesn't help, so 
it's not an operator precedence issue or anything like that. For some reason, 
the result of 3 * .7 is not considered to be equal to the literal 2.1. What's 
the deal? As I understand it, floating point operations at compile time do not 
necessarily match those done at runtime, but 2.1 should be plenty exact for 
both compile time and runtime. It's not like there are a lot of digits in the 
number, and it prints out 2.1 whether you're dealing with a variable or a 
literal. What's the deal here? Is this a bug? Or am I just totally 
misunderstanding something?

- Jonathan M Davis
Dec 16 2010
parent Don <nospam nospam.com> writes:
Jonathan M Davis wrote:
 Maybe I'm just totally missing something, but this seems really wrong to me. 
 This program:
 
 import std.stdio;
 
 void main()
 {
     writeln(2.1);
     writeln(2.1 == 2.1);
     writeln(3 * .7);
     writeln(2.1 == 3 * .7);
 
     auto a = 2.1;
     auto b = 3 * .7;
     writeln(a);
     writeln(b);
     writeln(a == b);
 }
 
 
 
 prints out
 
 2.1
 true
 2.1
 false
 2.1
 2.1
 true
 
 
 How on earth is 2.1 not equal to 3 * .7?

0.7 is not exactly representable in binary floating point, and neither is 2.1. 
They are the binary equivalent of a recurring decimal like 0.333333333333... - 
the bit pattern repeats forever and has to be cut off somewhere.

 Adding extra parens doesn't help, so
 it's not an operator precedence issue or anything like that. For some reason, 
 the result of 3 * .7 is not considered to be equal to the literal 2.1. What's 
 the deal? As I understand it, floating point operations at compile time do not 
 necessarily match those done at runtime, but 2.1 should be plenty exact for 
 both compile time and runtime. It's not like there are a lot of digits in the number, 
 and it prints out 2.1 whether you're dealing with a variable or a literal. 
 What's the deal here? Is this a bug? Or am I just totally misunderstanding 
 something?

Constant folding is done at real precision. So 3 * .7 is an 80-bit number. 
But 'a' has type double, so it's been truncated to 64 bit precision.

When something confusing like this happens, I recommend using the "%a" format, 
since it never lies:

writefln("%a %a", a, 3*.7);
Dec 16 2010