
digitalmars.D - approxEqual() has fooled me for a long time...

reply "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
(This message was originally meant for the Phobos mailing list, but for 
some reason I am currently unable to send messages to it*.  Anyway, it's 
probably worth making others aware of this as well.)

In my code, and in unittests in particular, I use std.math.approxEqual()
a lot to check the results of various computations.  If I expect my
result to be correct to within ten significant digits, say, I'd write

  assert (approxEqual(result, expected, 1e-10));

Since results often span several orders of magnitude, I usually don't
care about the absolute error, so I just leave it unspecified.  So far,
so good, right?

NO!

I just discovered today that approxEqual's default absolute tolerance
is 1e-5, and not zero as one would expect.  This means that the
following, quite unexpectedly, succeeds:

  assert (approxEqual(1e-10, 1e-20, 0.1));

This seems completely illogical to me, and I think it should be fixed
ASAP.  Any objections?
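For the record, a minimal sketch of the failure and the workaround, assuming the four-argument approxEqual overload whose fourth parameter is the absolute tolerance:

```d
import std.math : approxEqual;

void main()
{
    // The default absolute tolerance (1e-5) masks the relative test:
    assert( approxEqual(1e-10, 1e-20, 0.1));      // passes, surprisingly

    // Passing 0 as the absolute tolerance forces a purely relative test,
    // so the comparison now correctly reports "not equal":
    assert(!approxEqual(1e-10, 1e-20, 0.1, 0.0));
}
```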


Changing it to zero turned up fifteen failing unittests in SciD. :(

-Lars


* Regarding the mailing list problem, Thunderbird is giving me the 
following message:

  RCPT TO <phobos puremagic.com> failed:
  <phobos puremagic.com>: Recipient address rejected:
  User unknown in relay recipient table

Is anyone else on the lists seeing this, or is the problem with my mail 
server?
Oct 20 2010
next sibling parent "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
On Wed, 20 Oct 2010 10:32:06 +0000, Lars T. Kyllingstad wrote:

 [...]

And yes, I know: "Lars, RTFM!" :)

In this case, however, it never even occurred to me to check.

-Lars
Oct 20 2010
prev sibling next sibling parent reply Don <nospam nospam.com> writes:
Lars T. Kyllingstad wrote:
 (This message was originally meant for the Phobos mailing list, but for 
 some reason I am currently unable to send messages to it*.  Anyway, it's 
 probably worth making others aware of this as well.)
 
 In my code, and in unittests in particular, I use std.math.approxEqual()
 a lot to check the results of various computations.  If I expect my
 result to be correct to within ten significant digits, say, I'd write
 
   assert (approxEqual(result, expected, 1e-10));
 
 Since results often span several orders of magnitude, I usually don't
 care about the absolute error, so I just leave it unspecified.  So far,
 so good, right?
 
 NO!
 
 I just discovered today that the default value for approxEqual's default
 absolute tolerance is 1e-5, and not zero as one would expect.  This
 means that the following, quite unexpectedly, succeeds:
 
   assert (approxEqual(1e-10, 1e-20, 0.1));
 
 This seems completely illogical to me, and I think it should be fixed
 ASAP.  Any objections?

I'm personally pretty upset about the existence of that function at all. My very first contribution to D was a function for floating point approximate equality, which I called approxEqual. It gives equality in terms of number of bits. It gives correct results in all the tricky special cases. Unlike a naive relative equality test involving divisions, it doesn't fail for values near zero. (I _think_ that's the reason why people think you need an absolute equality test as well.) And it's fast: no divisions, no poorly predictable branches.

Unfortunately, somebody on the ng insisted that it should be called feqrel(). Stupidly, I listened. And now nobody uses my masterpiece because it has a totally sucky name.
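(A minimal usage sketch of the bit-counting idea, for readers who haven't met feqrel; the exact counts are illustrative and assume double's 53-bit significand.)

```d
import std.math : feqrel;

void main()
{
    double a = 1.0;
    double b = 1.0 + double.epsilon;  // the next representable double after 1.0

    assert(feqrel(a, a) == double.mant_dig);  // identical values: all 53 bits
    assert(feqrel(a, b) >= 50);               // adjacent values: nearly all bits
    assert(feqrel(1e-10, 1e-20) == 0);        // wildly different magnitudes
}
```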
Oct 20 2010
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Don:

 Unfortunately, somebody on the ng insisted that it should be called 
 feqrel(). Stupidly,  I listened. And now nobody uses my masterpiece 
 because it has a totally sucky name.

Phobos2 is still changing, and even some language details are still changing, so there is time to fix the situation. For example, approxEqual() may just call feqrel(), and the third argument of approxEqual() may be changed to the (integer) minimal number of bits that make it return true.

Bye,
bearophile
Oct 20 2010
prev sibling next sibling parent reply "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
On Wed, 20 Oct 2010 12:57:11 +0200, Don wrote:

 Lars T. Kyllingstad wrote:
 [...]

 I'm personally pretty upset about the existence of that function at all. [...] Unfortunately, somebody on the ng insisted that it should be called feqrel(). Stupidly, I listened. And now nobody uses my masterpiece because it has a totally sucky name.

That *is* a sucky name.  Well, now that I'm aware of it, I'll be sure to check it out. :)

However, I, like most people, am a lot more used to thinking in terms of digits than bits.  If I need my results to be correct to within 10 significant digits, say, how (if possible) would I use feqrel() to ensure that?

-Lars
Oct 20 2010
parent Walter Bright <newshound2 digitalmars.com> writes:
Lars T. Kyllingstad wrote:
 However, I, like most people, am a lot more used to thinking in terms of 
 digits than bits.  If I need my results to be correct to within 10 
 significant digits, say, how (if possible) would I use feqrel() to ensure 
 that?

(# of bits) = (# of decimal digits) * log2(10)
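A worked example of the conversion (mine, not Walter's): ten decimal digits need 10 * log2(10) ≈ 33.2 bits, so one would round up and demand at least 34 matching bits from feqrel:

```d
import std.math : feqrel;

void main()
{
    double result   = 2.718281828459045;
    double expected = 2.718281828459046;  // agrees to about 15 significant digits

    // 10 digits * log2(10) ~ 33.2, rounded up to 34 bits:
    assert(feqrel(result, expected) >= 34);
}
```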
Oct 20 2010
prev sibling next sibling parent Fawzi Mohamed <fawzi gmx.ch> writes:
On 20-ott-10, at 13:18, Lars T. Kyllingstad wrote:

 On Wed, 20 Oct 2010 12:57:11 +0200, Don wrote:

 Lars T. Kyllingstad wrote:
 [...]

 I'm personally pretty upset about the existence of that function at all. [...] Unlike a naive relative equality test involving divisions, it doesn't fail for values near zero. (I _think_ that's the reason why people think you need an absolute equality test as well). [...]


I use it, but I think that having *also* a function with a non-zero absolute error is useful. With more complex operations you sum, and if the expected result is 0, then feqrel will consider *any* non-zero number as completely wrong. For example, when testing matrix multiplication you cannot use feqrel alone. Still, feqrel is a very useful primitive that I do use (thanks Don).
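Fawzi's point can be sketched as a combined test (my illustration, not code from the thread): bit-level agreement via feqrel, plus an absolute-tolerance escape hatch for results that should be near zero.

```d
import std.math : fabs, feqrel;

/// Hypothetical combined test: relative agreement in bits, falling back
/// to an absolute tolerance when the expected value is (close to) zero.
bool closeEnough(double computed, double expected, int minBits, double absTol)
{
    return feqrel(computed, expected) >= minBits
        || fabs(computed - expected) <= absTol;
}

void main()
{
    // A sum that should be exactly zero, but carries rounding noise:
    double r = 0.1 + 0.2 - 0.3;               // about 5.6e-17, not 0.0
    assert(feqrel(r, 0.0) == 0);              // feqrel alone: "completely wrong"
    assert(closeEnough(r, 0.0, 34, 1e-12));   // absolute fallback rescues it
}
```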

Oct 20 2010
prev sibling next sibling parent "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
On Wed, 20 Oct 2010 13:33:49 +0200, Fawzi Mohamed wrote:

 On 20-ott-10, at 13:18, Lars T. Kyllingstad wrote:
 
 [...]
 That *is* a sucky name.  Well, now that I'm aware of it, I'll be sure
 to
 check it out. :)

 However, I, like most people, am a lot more used to thinking in terms
 of
 digits than bits.  If I need my results to be correct to within 10
 significant digits, say, how (if possible) would I use feqrel() to
 ensure
 that?


...which would be the solution of 2^bits = 10^digits, I guess.

Man, I've got to sit down and learn some more about FP numbers one day. Thanks!

-Lars
Oct 20 2010
prev sibling next sibling parent so <so so.do> writes:
It is a long read and on the last page it says "Use fixed point or interval" :P

On Wed, 20 Oct 2010 14:59:54 +0300, Lars T. Kyllingstad  
<public kyllingen.nospamnet> wrote:

 [...]

-- Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
Oct 20 2010
prev sibling next sibling parent Austin Hastings <ah08010-d yahoo.com> writes:
On 10/20/2010 6:57 AM, Don wrote:
 [...]
 Unfortunately, somebody on the ng insisted that it should be called
 feqrel(). Stupidly, I listened. And now nobody uses my masterpiece
 because it has a totally sucky name.

So here I am, needing to get some test cases passing with floating comparison, and how fortuitous that you mentioned writing this already. Naturally, I tried it, with floats as my default type:

  d:\Devel\D\dmd2\windows\bin\..\..\src\phobos\std\math.d(3286): Error: function std.math.feqrel!(const(float)).feqrel has no return statement, but is expected to return a value of type int

Is there a patch?

=Austin
Oct 20 2010
prev sibling next sibling parent Fawzi Mohamed <fawzi gmx.ch> writes:
On 20-ott-10, at 13:59, Lars T. Kyllingstad wrote:

 On Wed, 20 Oct 2010 13:33:49 +0200, Fawzi Mohamed wrote:

 On 20-ott-10, at 13:18, Lars T. Kyllingstad wrote:

 [...]
 However, I, like most people, am a lot more used to thinking in  
 terms
 of
 digits than bits.  If I need my results to be correct to within 10
 significant digits, say, how (if possible) would I use feqrel() to
 ensure
 that?


...which would be the solution of 2^bits = 10^digits, I guess. Man, I've got to sit down and learn some more about FP numbers one day.

yes, floating point uses base-2 numbers: mantissa + exponent, both base 2. feqrel gives you how many bits (i.e. base-2 digits) are equal in the two numbers. 2^33 = 8589934592, which has 10 digits; 2^34 already has 11 digits, so having more than 33 binary digits in common means having more than 10 base-10 digits in common.
 Thanks!

you are welcome Fawzi
 -Lars

Oct 20 2010
prev sibling next sibling parent "Craig Black" <cblack ara.com> writes:
"Don" <nospam nospam.com> wrote in message 
news:i9mhvd$2l5f$1 digitalmars.com...
 Lars T. Kyllingstad wrote:
 [...]

 I'm personally pretty upset about the existence of that function at all. My very first contribution to D was a function for floating point approximate equality, which I called approxEqual. [...] Unfortunately, somebody on the ng insisted that it should be called feqrel(). Stupidly, I listened. And now nobody uses my masterpiece because it has a totally sucky name.

Where can I find this function that you wrote? I would be interested in using it.

-Craig
Oct 20 2010
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/20/10 5:57 CDT, Don wrote:
 Lars T. Kyllingstad wrote:
 [...]

 I'm personally pretty upset about the existence of that function at all. [...] And now nobody uses my masterpiece because it has a totally sucky name.

Hi Don,

Any chance you could please integrate feqrel within approxEqual? It's not exactly the code I'd be most proud of.

As a side note, I keep on thinking of syntactic cutesies related to approxEqual. I think doubles should be compared for equality very rarely, so approximate comparisons should be easy to read and write. For example:

  double a, b;
  ...
  if (a == within(1e-5) == b) { ... }

The condition is true when a is within 1e-5 from b. One issue is that "within" conveys absolute distance, when most of the time you want relative. Using e.g. withinPercent is clearer but not cute anymore.

Any ideas? The feqrel and approxEqual story does seem to suggest that names (and by extension syntax) are important.

Andrei
Oct 20 2010
next sibling parent dennis luehring <dl.soluz gmx.net> writes:
 double a, b;
 ...
 if (a == within(1e-5) == b) { ... }

i like that idea - should help a lot

how should that thing be called - type-dependent-operator-behavior? is that something like an extension operator?
Oct 20 2010
prev sibling next sibling parent "Aelxx" <aelxx yandex.ru> writes:
"Andrei Alexandrescu" wrote:
 if (a == within(1e-5) == b) { ... }

For me it looks strange. I think something like approxEqual reads more clearly here.
Oct 20 2010
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 Any ideas? The feqrel and approxEqual story does seem to suggest that 
 names (and by extension syntax) are important.

When doing real engineering calculations, we think in terms of "significant digits" of accuracy, not accurate to .001. The within(1e-5) is seductively wrong. Something like matchBits(a,b,nbits) and matchDigits(a,b,ndigits) would be better.
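Walter's proposed names could be thin wrappers over feqrel; a hypothetical sketch (neither matchBits nor matchDigits exists in Phobos, and the digit-to-bit rounding is my choice):

```d
import std.math : ceil, feqrel, log2;

/// Hypothetical: true if a and b share at least nbits of significand.
bool matchBits(real a, real b, int nbits)
{
    return feqrel(a, b) >= nbits;
}

/// Hypothetical decimal variant, via bits = digits * log2(10).
bool matchDigits(real a, real b, int ndigits)
{
    return matchBits(a, b, cast(int) ceil(ndigits * log2(10.0)));
}

void main()
{
    assert( matchDigits(1.23456789012L, 1.23456789013L, 10)); // ~11 digits agree
    assert(!matchDigits(1.234L, 1.235L, 10));                 // only ~3 digits
}
```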
Oct 20 2010
prev sibling next sibling parent so <so so.do> writes:
It is in phobos/std/math.d, where approxEqual is.

On Wed, 20 Oct 2010 18:22:23 +0300, Craig Black <cblack ara.com> wrote:

 "Don" <nospam nospam.com> wrote in message  
 news:i9mhvd$2l5f$1 digitalmars.com...
 Lars T. Kyllingstad wrote:
 (This message was originally meant for the Phobos mailing list, but  
 for some reason I am currently unable to send messages to it*.   
 Anyway, it's probably worth making others aware of this as well.)

 In my code, and in unittests in particular, I use  
 std.math.approxEqual()
 a lot to check the results of various computations.  If I expect my
 result to be correct to within ten significant digits, say, I'd write

   assert (approxEqual(result, expected, 1e-10));

 Since results often span several orders of magnitude, I usually don't
 care about the absolute error, so I just leave it unspecified.  So far,
 so good, right?

 NO!

 I just discovered today that the default value for approxEqual's  
 default
 absolute tolerance is 1e-5, and not zero as one would expect.  This
 means that the following, quite unexpectedly, succeeds:

   assert (approxEqual(1e-10, 1e-20, 0.1));

 This seems completely illogical to me, and I think it should be fixed
 ASAP.  Any objections?

I'm personally pretty upset about the existence of that function at all. My very first contribution to D was a function for floating point approximate equality, which I called approxEqual. It gives equality in terms of number of bits. It gives correct results in all the tricky special cases. Unlike a naive relative equality test involving divisions, it doesn't fail for values near zero. (I _think_ that's the reason why people think you need an absolute equality test as well). And it's fast. No divisions, no poorly predictable branches. Unfortunately, somebody on the ng insisted that it should be called feqrel(). Stupidly, I listened. And now nobody uses my masterpiece because it has a totally sucky name.

Where can I find this function that you wrote? I would be interested in using it. -Craig

-- Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
Oct 20 2010
prev sibling next sibling parent reply "Aelxx" <aelxx yandex.ru> writes:
"Don" <nospam nospam.com> пов?домив/пов?домила в 
новинах:i9mhvd$2l5f$1 digitalmars.com...
 Lars T. Kyllingstad wrote:
 in all the tricky special cases. Unlike a naive relative equality test 
 involving divisions, it doesn't fail for values near zero. (I _think_ 
 that's the reason why people think you need an absolute equality test as 
 well).
 And it's fast. No divisions, no poorly predictable branches.

Just wondering. I always used relative equality in the form:

  fabs(a-b) > eps * fabs(a)

Is there something wrong with that, except maybe a zero a?
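A sketch of the weak spot (my illustration): when a is exactly zero, eps * fabs(a) collapses to zero, so nothing but an exact zero ever compares equal; presumably that is why approxEqual grew an absolute tolerance.

```d
import std.math : fabs;

/// Aelxx's criterion turned into an equality test (illustration only).
bool relEqual(double a, double b, double eps)
{
    return fabs(a - b) <= eps * fabs(a);
}

void main()
{
    assert( relEqual(1000.0, 1000.001, 1e-5));  // fine for ordinary values

    // The zero case: eps * fabs(0.0) is 0, so only an exactly-zero b
    // can ever compare equal to a zero a.
    assert(!relEqual(0.0, 1e-300, 1e-5));
}
```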
Oct 20 2010
parent "Aelxx" <aelxx yandex.ru> writes:
 fabs(a-b) > eps *  fabs(a)

Oct 20 2010
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Don wrote:
 I'm personally pretty upset about the existence of that function at all.
 My very first contribution to D was a function for floating point 
 approximate equality, which I called approxEqual.
 It gives equality in terms of number of bits. It gives correct results 
 in all the tricky special cases. Unlike a naive relative equality test 
 involving divisions, it doesn't fail for values near zero. (I _think_ 
 that's the reason why people think you need an absolute equality test as 
 well).
 And it's fast. No divisions, no poorly predictable branches.

I totally agree that a precision based on the number of bits, not the magnitude, is the right approach.
 Unfortunately, somebody on the ng insisted that it should be called 
 feqrel(). Stupidly,  I listened. And now nobody uses my masterpiece 
 because it has a totally sucky name.

Names matter!
Oct 20 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/20/10 13:42 CDT, Walter Bright wrote:
 Don wrote:
 [...]

I totally agree that a precision based on the number of bits, not the magnitude, is the right approach.

I wonder, could that be also generalized for zero? I.e., if a number is zero except for k bits in the mantissa.

Andrei
Oct 20 2010
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 [...]

I wonder, could that be also generalized for zero? I.e., if a number is zero except for k bits in the mantissa.

Zero is a special case I'm not sure how to deal with.
Oct 20 2010
parent reply Don <nospam nospam.com> writes:
Walter Bright wrote:
 [...]

I wonder, could that be also generalized for zero? I.e., if a number is zero except for k bits in the mantissa.

Zero is a special case I'm not sure how to deal with.

It does generalize to zero. Denormals have the first k bits in the mantissa set to zero. feqrel automatically treats them as 'close to zero'. It just falls out of the maths.

BTW, if the processor has a "flush denormals to zero" mode, denormals will compare exactly equal to zero.
Oct 20 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/20/10 16:33 CDT, Don wrote:
 [...]

It does generalize to zero. Denormals have the first k bits in the mantissa set to zero. feqrel automatically treats them as 'close to zero'. It just falls out of the maths. BTW if the processor has a "flush denormals to zero" mode, denormals will compare exactly equal to zero.

So here's a plan of attack:

1. Keep feqrel. Clearly it's a useful primitive.

2. Find a more intuitive interface for feqrel, i.e. using decimal digits for precision etc. The definition in std.math, "the number of mantissa bits which are equal in x and y", loses 90% of the readership at the word "mantissa". You want something intuitive that people can immediately picture.

3. Define a good name for that.

4. Decide on keeping or deprecating approxEqual depending on the adoption of that new function.

Andrei
Oct 20 2010
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:

 2. Find a more intuitive interface for feqrel, i.e. using decimal digits 
 for precision etc. The definition in std.math: "the number of mantissa 
 bits which are equal in x and y" loses 90% of the readership at the word 
 "mantissa". You want something intuitive that people can immediately 
 picture.

If you assume that a reader of those docs may know what the FP mantissa is, then the solution is not to remove the word "mantissa" from there, but to add a link to Wikipedia page, or/and to just show an image like: http://en.wikipedia.org/wiki/File:IEEE_754_Double_Floating_Point_Format.svg If you want to use a function about floating point approximations you need to know how a FP is represented. Bye, bearophile
Oct 21 2010
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
 If you assume that a reader of those docs may know what the FP mantissa is, ...

Sorry, I meant: If you assume that a reader of those docs may not know what the FP mantissa is, ...
Oct 21 2010
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 Andrei Alexandrescu:
 
 2. Find a more intuitive interface for feqrel, i.e. using decimal digits 
 for precision etc. The definition in std.math: "the number of mantissa bits
 which are equal in x and y" loses 90% of the readership at the word 
 "mantissa". You want something intuitive that people can immediately 
 picture.

If you assume that a reader of those docs may know what the FP mantissa is, then the solution is not to remove the word "mantissa" from there, but to add a link to Wikipedia page, or/and to just show an image like: http://en.wikipedia.org/wiki/File:IEEE_754_Double_Floating_Point_Format.svg If you want to use a function about floating point approximations you need to know how a FP is represented.

The image doesn't use the word "mantissa" ! Though I do agree that the docs, where appropriate, should have links to explanatory definitions. There are some in the Phobos docs already.
Oct 22 2010
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
Robert Jacques wrote:
 Since D has the "is" operator, does it make sense to actually 'fix' "==" 
 to be fuzzy?

I don't think so, because nobody will ever agree on what "fuzzy" means. Arbitrarily setting a meaning for it will perpetuate the notion that floating point operations have seemingly random results.
Oct 21 2010
prev sibling next sibling parent "Robert Jacques" <sandford jhu.edu> writes:
On Thu, 21 Oct 2010 00:19:11 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 10/20/10 16:33 CDT, Don wrote:
 Walter Bright wrote:
 Andrei Alexandrescu wrote:
 On 10/20/10 13:42 CDT, Walter Bright wrote:
 Don wrote:
 I'm personally pretty upset about the existence of that function at
 all.
 My very first contribution to D was a function for floating point
 approximate equality, which I called approxEqual.
 It gives equality in terms of number of bits. It gives correct  
 results
 in all the tricky special cases. Unlike a naive relative equality  
 test
 involving divisions, it doesn't fail for values near zero. (I  
 _think_
 that's the reason why people think you need an absolute equality  
 test
 as well).
 And it's fast. No divisions, no poorly predictable branches.

I totally agree that a precision based on the number of bits, not the magnitude, is the right approach.

I wonder, could that be also generalized for zero? I.e., if a number is zero except for k bits in the mantissa.

Zero is a special case I'm not sure how to deal with.

It does generalize to zero. Denormals have the first k bits in the mantissa set to zero. feqrel automatically treats them as 'close to zero'. It just falls out of the maths. BTW if the processor has a "flush denormals to zero" mode, denormals will compare exactly equal to zero.

So here's a plan of attack: 1. Keep feqrel. Clearly it's a useful primitive.

vote++
 2. Find a more intuitive interface for feqrel, i.e. using decimal digits  
 for precision etc. The definition in std.math: "the number of mantissa  
 bits which are equal in x and y" loses 90% of the readership at the word  
 "mantissa". You want something intuitive that people can immediately  
 picture.

 3. Define a good name for that

When I think of floating point precision, I automatically think in ULP (units in last place). It is how the IEEE 754 specification specifies the precision of the basic math operators, in part because a given ULP requirement is applicable to any floating point type. And the ability for ulp(x,y) <= 2 to be meaningful for floats, doubles and reals is great for templates/generic programming. Essentially:

  ulp(x,y) == min(x.mant_dig, y.mant_dig) - feqrel(x,y);

On this subject, I remember that immediately after learning about the "==" operator I was instructed to never, ever use it for floating point values unless I knew for a fact one value had to be a copy of another. This of course leads to bad programming habits like:

"In floating-point arithmetic, numbers, including many simple fractions, cannot be represented exactly, and it may be necessary to test for equality within a given tolerance. For example, rounding errors may mean that the comparison in

  a = 1/7
  if a*7 = 1 then ...

unexpectedly evaluates false. Typically this problem is handled by rewriting the comparison as if abs(a*7 - 1) < tolerance then ..., where tolerance is a suitably tiny number and abs is the absolute value function." - from Wikipedia's Comparison (computer programming) page.

Since D has the "is" operator, does it make sense to actually 'fix' "==" to be fuzzy? Or perhaps to make the set of extended floating point comparison operators fuzzy? (i.e. "!<>" <=> ulp(x,y) <= 2) Or just add an approximately equal operator (i.e. "~~" or "=~") (since nobody will type the actual Unicode ≈). Perhaps even a tiered approach ("==" <=> ulp(x,y) <= 1, "~~" <=> ulp(x,y) <= 8).
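The relationship to feqrel quoted above can be sketched as a small helper. Note this is an illustration only: ulpDistanceBits is a hypothetical name, not a Phobos function, and it counts disagreeing mantissa bits rather than an exact ULP distance.

```d
import std.math : feqrel;

/// Hypothetical helper (not in Phobos): a rough ULP-style distance,
/// counting how many low-order mantissa bits x and y disagree in,
/// derived from feqrel as in the formula above.
int ulpDistanceBits(T)(T x, T y)
{
    return T.mant_dig - feqrel(x, y);
}

unittest
{
    // Adjacent doubles differ only in the last mantissa bit.
    assert(ulpDistanceBits(1.0, 1.0 + double.epsilon) <= 1);
    // Identical values are zero bits apart (feqrel returns mant_dig).
    assert(ulpDistanceBits(1.5, 1.5) == 0);
}
```

Compile with -unittest to run the checks.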
Oct 20 2010
prev sibling next sibling parent "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
On Wed, 20 Oct 2010 23:33:54 +0200, Don wrote:

 Walter Bright wrote:
 Andrei Alexandrescu wrote:
 On 10/20/10 13:42 CDT, Walter Bright wrote:
 Don wrote:
 I'm personally pretty upset about the existence of that function at
 all.
 My very first contribution to D was a function for floating point
 approximate equality, which I called approxEqual. It gives equality
 in terms of number of bits. It gives correct results in all the
 tricky special cases. Unlike a naive relative equality test
 involving divisions, it doesn't fail for values near zero. (I
 _think_ that's the reason why people think you need an absolute
 equality test as well).
 And it's fast. No divisions, no poorly predictable branches.

I totally agree that a precision based on the number of bits, not the magnitude, is the right approach.

I wonder, could that be also generalized for zero? I.e., if a number is zero except for k bits in the mantissa.

Zero is a special case I'm not sure how to deal with.

It does generalize to zero. Denormals have the first k bits in the mantissa set to zero. feqrel automatically treats them as 'close to zero'. It just falls out of the maths. BTW if the processor has a "flush denormals to zero" mode, denormals will compare exactly equal to zero.

writeln(feqrel(0.0, double.min_normal/100)); // -688 ??? -Lars
Oct 20 2010
prev sibling parent "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
On Wed, 20 Oct 2010 23:19:11 -0500, Andrei Alexandrescu wrote:

 So here's a plan of attack:
 
 1. Keep feqrel. Clearly it's a useful primitive.
 
 2. Find a more intuitive interface for feqrel, i.e. using decimal digits
 for precision etc. The definition in std.math: "the number of mantissa
 bits which are equal in x and y" loses 90% of the readership at the word
 "mantissa". You want something intuitive that people can immediately
 picture.
 
 3. Define a good name for that
 
 4. Decide on keeping or deprecating approxEqual depending on the
 adoption of that new function.

First attempt, using Walter's name suggestion, matchDigits():

  http://www.dsource.org/projects/scid/browser/scid/util.d

Too simplistic? Comments are welcome.

-Lars
Oct 21 2010
prev sibling next sibling parent dsimcha <dsimcha yahoo.com> writes:
== Quote from Lars T. Kyllingstad (public kyllingen.NOSPAMnet)'s article
 (This message was originally meant for the Phobos mailing list, but for
 some reason I am currently unable to send messages to it*.  Anyway, it's
 probably worth making others aware of this as well.)
 In my code, and in unittests in particular, I use std.math.approxEqual()
 a lot to check the results of various computations.  If I expect my
 result to be correct to within ten significant digits, say, I'd write
   assert (approxEqual(result, expected, 1e-10));
 Since results often span several orders of magnitude, I usually don't
 care about the absolute error, so I just leave it unspecified.  So far,
 so good, right?
 NO!
 I just discovered today that the default value for approxEqual's default
 absolute tolerance is 1e-5, and not zero as one would expect.  This
 means that the following, quite unexpectedly, succeeds:
   assert (approxEqual(1e-10, 1e-20, 0.1));
 This seems completely illogical to me, and I think it should be fixed
 ASAP.  Any objections?
 Changing it to zero turned up fifteen failing unittests in SciD. :(
 -Lars

What about when the values are numbers a tiny bit different than zero, and should be exactly zero except for a small amount of numerical fuzz? I think absolute error needs to be taken into account by default, too.
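The "numerical fuzz" case dsimcha describes is easy to reproduce; here is a minimal sketch of it using std.math:

```d
import std.math : PI, sin, fabs;

unittest
{
    // sin(PI) is mathematically zero, but PI itself is rounded to the
    // nearest double, so the result is a tiny non-zero number. A purely
    // relative test against 0.0 can never call it "approximately zero";
    // only an absolute tolerance catches it.
    double r = sin(PI);
    assert(r != 0.0);
    assert(fabs(r) < 1e-12);  // yet it is absolutely tiny
}
```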
Oct 20 2010
prev sibling next sibling parent so <so so.do> writes:
Btw, it is the right behavior, and logical; maybe a mistype on your part?
In your example it won't even use the absolute error.

On Wed, 20 Oct 2010 13:32:06 +0300, Lars T. Kyllingstad  
<public kyllingen.nospamnet> wrote:

 (This message was originally meant for the Phobos mailing list, but for
 some reason I am currently unable to send messages to it*.  Anyway, it's
 probably worth making others aware of this as well.)

 In my code, and in unittests in particular, I use std.math.approxEqual()
 a lot to check the results of various computations.  If I expect my
 result to be correct to within ten significant digits, say, I'd write

   assert (approxEqual(result, expected, 1e-10));

 Since results often span several orders of magnitude, I usually don't
 care about the absolute error, so I just leave it unspecified.  So far,
 so good, right?

 NO!

 I just discovered today that the default value for approxEqual's default
 absolute tolerance is 1e-5, and not zero as one would expect.  This
 means that the following, quite unexpectedly, succeeds:

   assert (approxEqual(1e-10, 1e-20, 0.1));

 This seems completely illogical to me, and I think it should be fixed
 ASAP.  Any objections?


 Changing it to zero turned up fifteen failing unittests in SciD. :(

 -Lars


 * Regarding the mailing list problem, Thunderbird is giving me the
 following message:

   RCPT TO <phobos puremagic.com> failed:
   <phobos puremagic.com>: Recipient address rejected:
   User unknown in relay recipient table

 Are anyone else on the lists seeing this, or is the problem with my mail
 server?

-- Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
Oct 20 2010
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/20/10 5:32 CDT, Lars T. Kyllingstad wrote:
 (This message was originally meant for the Phobos mailing list, but for
 some reason I am currently unable to send messages to it*.  Anyway, it's
 probably worth making others aware of this as well.)

 In my code, and in unittests in particular, I use std.math.approxEqual()
 a lot to check the results of various computations.  If I expect my
 result to be correct to within ten significant digits, say, I'd write

    assert (approxEqual(result, expected, 1e-10));

 Since results often span several orders of magnitude, I usually don't
 care about the absolute error, so I just leave it unspecified.  So far,
 so good, right?

 NO!

 I just discovered today that the default value for approxEqual's default
 absolute tolerance is 1e-5, and not zero as one would expect.  This
 means that the following, quite unexpectedly, succeeds:

    assert (approxEqual(1e-10, 1e-20, 0.1));

 This seems completely illogical to me, and I think it should be fixed
 ASAP.  Any objections?

I wonder what would be a sensible default. If the default for absolute error is zero, then you'd have an equally odd behavior for very small numbers (and most notably zero). Essentially nothing would be approximately zero. Andrei
Oct 20 2010
next sibling parent "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
On Wed, 20 Oct 2010 10:30:39 -0500, Andrei Alexandrescu wrote:

 On 10/20/10 5:32 CDT, Lars T. Kyllingstad wrote:
 (This message was originally meant for the Phobos mailing list, but for
 some reason I am currently unable to send messages to it*.  Anyway,
 it's probably worth making others aware of this as well.)

 In my code, and in unittests in particular, I use
 std.math.approxEqual() a lot to check the results of various
 computations.  If I expect my result to be correct to within ten
 significant digits, say, I'd write

    assert (approxEqual(result, expected, 1e-10));

 Since results often span several orders of magnitude, I usually don't
 care about the absolute error, so I just leave it unspecified.  So far,
 so good, right?

 NO!

 I just discovered today that the default value for approxEqual's
 default absolute tolerance is 1e-5, and not zero as one would expect. 
 This means that the following, quite unexpectedly, succeeds:

    assert (approxEqual(1e-10, 1e-20, 0.1));

 This seems completely illogical to me, and I think it should be fixed
 ASAP.  Any objections?

I wonder what would be a sensible default. If the default for absolute error is zero, then you'd have an equally odd behavior for very small numbers (and most notably zero). Essentially nothing would be approximately zero. Andrei

Well, seeing as we have now been made aware of Don's feqrel() function, would something like this work?

  bool approxEqual(T)(T lhs, T rhs, uint significantDigits)
  {
      enum r = LN10 / LN2;  // log2(10); std.math's constants work in enums
      return feqrel(lhs, rhs) > significantDigits * r;
  }

-Lars
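A self-contained version of the sketch in the post above, for the record. The constant log(10)/log(2) is spelled via std.math's LN10/LN2 so it can initialize an enum, and the name approxEqualDigits is hypothetical, chosen to avoid clashing with std.math.approxEqual:

```d
import std.math : feqrel, LN10, LN2;

/// Hypothetical name to avoid clashing with std.math.approxEqual.
/// Two values "match" if they share enough mantissa bits to agree
/// in the requested number of significant decimal digits.
bool approxEqualDigits(T)(T lhs, T rhs, uint significantDigits)
{
    enum r = LN10 / LN2;  // log2(10): about 3.32 bits per decimal digit
    return feqrel(lhs, rhs) > significantDigits * r;
}

unittest
{
    // Agree to ~11 significant digits: passes a 9-digit requirement.
    assert( approxEqualDigits(1.00000000001, 1.00000000002, 9));
    // Agree to only ~3 significant digits: fails it.
    assert(!approxEqualDigits(1.001, 1.002, 9));
}
```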
Oct 20 2010
prev sibling next sibling parent "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
On Wed, 20 Oct 2010 15:44:48 +0000, Lars T. Kyllingstad wrote:

 On Wed, 20 Oct 2010 10:30:39 -0500, Andrei Alexandrescu wrote:
 
 On 10/20/10 5:32 CDT, Lars T. Kyllingstad wrote:
 (This message was originally meant for the Phobos mailing list, but
 for some reason I am currently unable to send messages to it*. 
 Anyway, it's probably worth making others aware of this as well.)

 In my code, and in unittests in particular, I use
 std.math.approxEqual() a lot to check the results of various
 computations.  If I expect my result to be correct to within ten
 significant digits, say, I'd write

    assert (approxEqual(result, expected, 1e-10));

 Since results often span several orders of magnitude, I usually don't
 care about the absolute error, so I just leave it unspecified.  So
 far, so good, right?

 NO!

 I just discovered today that the default value for approxEqual's
 default absolute tolerance is 1e-5, and not zero as one would expect.
 This means that the following, quite unexpectedly, succeeds:

    assert (approxEqual(1e-10, 1e-20, 0.1));

 This seems completely illogical to me, and I think it should be fixed
 ASAP.  Any objections?

I wonder what would be a sensible default. If the default for absolute error is zero, then you'd have an equally odd behavior for very small numbers (and most notably zero). Essentially nothing would be approximately zero. Andrei

Well, seeing as we have now been made aware of Don's feqrel() function, would something like this work?

  bool approxEqual(T)(T lhs, T rhs, uint significantDigits)
  {
      enum r = LN10 / LN2;  // log2(10); std.math's constants work in enums
      return feqrel(lhs, rhs) > significantDigits * r;
  }

-Lars

FWIW, that's how you specify precision in Mathematica -- by the number of digits. -Lars
Oct 20 2010
prev sibling next sibling parent reply Don <nospam nospam.com> writes:
Andrei Alexandrescu wrote:
 On 10/20/10 5:32 CDT, Lars T. Kyllingstad wrote:
 (This message was originally meant for the Phobos mailing list, but for
 some reason I am currently unable to send messages to it*.  Anyway, it's
 probably worth making others aware of this as well.)

 In my code, and in unittests in particular, I use std.math.approxEqual()
 a lot to check the results of various computations.  If I expect my
 result to be correct to within ten significant digits, say, I'd write

    assert (approxEqual(result, expected, 1e-10));

 Since results often span several orders of magnitude, I usually don't
 care about the absolute error, so I just leave it unspecified.  So far,
 so good, right?

 NO!

 I just discovered today that the default value for approxEqual's default
 absolute tolerance is 1e-5, and not zero as one would expect.  This
 means that the following, quite unexpectedly, succeeds:

    assert (approxEqual(1e-10, 1e-20, 0.1));

 This seems completely illogical to me, and I think it should be fixed
 ASAP.  Any objections?

I wonder what would be a sensible default. If the default for absolute error is zero, then you'd have an equally odd behavior for very small numbers (and most notably zero). Essentially nothing would be approximately zero. Andrei

I don't think it's possible to have a sensible default for absolute tolerance, because you never know what scale is important. You can do a default for relative tolerance, because floating point numbers work that way (eg, you can say they're equal if they differ in only the last 4 bits, or if half of the mantissa bits are equal). I would even think that the acceptable relative error is almost always known at compile time, but the absolute error may not be.
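Don's "equal except in the last few bits" notion of relative tolerance maps directly onto feqrel; a small illustration (the specific thresholds are arbitrary examples, not recommendations):

```d
import std.math : feqrel;

unittest
{
    // x and y differ by 8 ulps, so all but the lowest few mantissa
    // bits agree: a loose "last 4-5 bits" relative test passes.
    double x = 1.0;
    double y = x + 8 * double.epsilon;
    assert(feqrel(x, y) >= double.mant_dig - 5);

    // A large relative difference shares far fewer leading bits.
    assert(feqrel(1.0, 1.1) < 8);
}
```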
Oct 20 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/20/10 10:52 CDT, Don wrote:
 I don't think it's possible to have a sensible default for absolute
 tolerance, because you never know what scale is important. You can do a
 default for relative tolerance, because floating point numbers work that
 way (eg, you can say they're equal if they differ in only the last 4
 bits, or if half of the mantissa bits are equal).

 I would even think that the acceptable relative error is almost always
 known at compile time, but the absolute error may not be.

I wonder if it could work to set either number, if zero, to the smallest normalized value. Then proceed with the feqrel algorithm. Would that work? Andrei
Oct 20 2010
next sibling parent reply Don <nospam nospam.com> writes:
Andrei Alexandrescu wrote:
 On 10/20/10 10:52 CDT, Don wrote:
 I don't think it's possible to have a sensible default for absolute
 tolerance, because you never know what scale is important. You can do a
 default for relative tolerance, because floating point numbers work that
 way (eg, you can say they're equal if they differ in only the last 4
 bits, or if half of the mantissa bits are equal).

 I would even think that the acceptable relative error is almost always
 known at compile time, but the absolute error may not be.

I wonder if it could work to set either number, if zero, to the smallest normalized value. Then proceed with the feqrel algorithm. Would that work? Andrei

feqrel actually treats zero fairly. There are exactly as many possible values almost equal to zero as there are near any other number. So in terms of the floating point number representation, the behaviour is perfect.

Thinking out loud here... I think that you use absolute error to deal with the difference between the computer's representation and the real world. You're almost pretending that they are fixed point numbers. Pretty much any real-world data set has a characteristic magnitude, and anything which is more than (say) 10^^50 times smaller than the average is probably equivalent to zero.
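One way to picture the "characteristic magnitude" idea is a combined test that accepts either a relative match or, for results near zero, an absolute one. This is a hypothetical sketch (the name close and its signature are made up), and as the discussion stresses, absTol must come from the caller's knowledge of the data, not from a default:

```d
import std.math : fabs;

/// Hypothetical combined tolerance test: x and y are "close" if their
/// difference is within absTol (the fixed-point view, scaled to the
/// data's characteristic magnitude) or within relTol of y's magnitude
/// (the floating-point view).
bool close(T)(T x, T y, T relTol, T absTol)
{
    auto diff = fabs(x - y);
    return diff <= absTol || diff <= relTol * fabs(y);
}

unittest
{
    // With absTol = 0, tiny numbers of wildly different magnitudes
    // are NOT close -- the behaviour Lars expected from approxEqual.
    assert(!close(1e-10, 1e-20, 1e-6, 0.0));
    // With an absTol chosen from the data's scale, both count as
    // "equivalent to zero".
    assert( close(1e-10, 1e-20, 1e-6, 1e-8));
}
```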
Oct 20 2010
parent Don <nospam nospam.com> writes:
Fawzi Mohamed wrote:
 
 On 20-ott-10, at 20:53, Don wrote:
 
 Andrei Alexandrescu wrote:
 On 10/20/10 10:52 CDT, Don wrote:
 I don't think it's possible to have a sensible default for absolute
 tolerance, because you never know what scale is important. You can do a
 default for relative tolerance, because floating point numbers work 
 that
 way (eg, you can say they're equal if they differ in only the last 4
 bits, or if half of the mantissa bits are equal).

 I would even think that the acceptable relative error is almost always
 known at compile time, but the absolute error may not be.

I wonder if it could work to set either number, if zero, to the smallest normalized value. Then proceed with the feqrel algorithm. Would that work? Andrei

feqrel actually treats zero fairly. There are exactly as many possible values almost equal to zero, as there are near any other number. So in terms of the floating point number representation, the behaviour is perfect. Thinking out loud here... I think that you use absolute error to deal with the difference between the computer's representation, and the real world. You're almost pretending that they are fixed point numbers. Pretty much any real-world data set has a characteristic magnitude, and anything which is more than (say) 10^^50 times smaller than the average is probably equivalent to zero.

The thing is twofold: for one thing, yes, numbers 10^^50 times smaller are not important, but the real problem is another one: you will probably add and subtract numbers of magnitude x, and on this operation the *absolute* error is x*epsilon. Note that the error is relative to the magnitude of the operands, not of the result, so it really is an absolute error.

You have just lost precision. BTW -- I haven't yet worked out if we are disagreeing with each other, or not.
 Now the end result might have a relative error, but also an absolute 
 error whose size depends on the magnitude of the operands.
 If the result is close to 0 the absolute error is likely to dominate, 
 and checking the relative error will fail.

I don't understand what you're saying here. If you encounter catastrophic cancellation, you really have no idea what the correct answer is.
 This is the case for example for matrix multiplication.
 In NArray I wanted to check the linar algebra routines with matrixes of 
 random numbers, feqrel did fail too much for number close to 0.

You mean, total loss of precision is acceptable for numbers close to zero? What it is telling you is correct: your routines have poor numerical behaviour near zero. feqrel is failing when you have an ill-conditioned matrix.
 Obviously the right thing as Walter said is to let the user choose the 
 magnitude of its results.

This is also what I said, in the post you're replying to: it depends on the data.
 In the code I posted I did choose simply 0.5**(mantissa_bits/4) which is 
 smaller than 1 but not horribly so.

 One can easily make that an input parameter (it is the shift parameter 
 in my code)

I don't like the idea of having an absolute error by default. Although it is sometimes appropriate, I don't think it should be viewed as something which should always be included. It can be highly misleading. I guess that was the point of the original post in this thread.
Oct 20 2010
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 On 10/20/10 10:52 CDT, Don wrote:
 I don't think it's possible to have a sensible default for absolute
 tolerance, because you never know what scale is important. You can do a
 default for relative tolerance, because floating point numbers work that
 way (eg, you can say they're equal if they differ in only the last 4
 bits, or if half of the mantissa bits are equal).

 I would even think that the acceptable relative error is almost always
 known at compile time, but the absolute error may not be.

I wonder if it could work to set either number, if zero, to the smallest normalized value. Then proceed with the feqrel algorithm. Would that work?

I would rather force the user to make a decision about what precision he's looking for, rather than a default.
Oct 20 2010
prev sibling next sibling parent Fawzi Mohamed <fawzi gmx.ch> writes:
On 20-ott-10, at 17:52, Don wrote:

 Andrei Alexandrescu wrote:
 On 10/20/10 5:32 CDT, Lars T. Kyllingstad wrote:
 (This message was originally meant for the Phobos mailing list,  
 but for
 some reason I am currently unable to send messages to it*.   
 Anyway, it's
 probably worth making others aware of this as well.)

 In my code, and in unittests in particular, I use  
 std.math.approxEqual()
 a lot to check the results of various computations.  If I expect my
 result to be correct to within ten significant digits, say, I'd  
 write

   assert (approxEqual(result, expected, 1e-10));

 Since results often span several orders of magnitude, I usually  
 don't
 care about the absolute error, so I just leave it unspecified.  So  
 far,
 so good, right?

 NO!

 I just discovered today that the default value for approxEqual's  
 default
 absolute tolerance is 1e-5, and not zero as one would expect.  This
 means that the following, quite unexpectedly, succeeds:

   assert (approxEqual(1e-10, 1e-20, 0.1));

 This seems completely illogical to me, and I think it should be  
 fixed
 ASAP.  Any objections?

I wonder what would be a sensible default. If the default for absolute error is zero, then you'd have an equally odd behavior for very small numbers (and most notably zero). Essentially nothing would be approximately zero. Andrei

I don't think it's possible to have a sensible default for absolute tolerance, because you never know what scale is important. You can do a default for relative tolerance, because floating point numbers work that way (eg, you can say they're equal if they differ in only the last 4 bits, or if half of the mantissa bits are equal). I would even think that the acceptable relative error is almost always known at compile time, but the absolute error may not be.

I had success in using (the very empiric)

  /// feqrel version more forgiving close to 0.
  /// If you sum values you cannot expect better than T.epsilon absolute
  /// error. feqrel compares relative error, and close to 0 (where the
  /// density of floats is high) it is much more stringent.
  /// To guarantee T.epsilon absolute error one should use shift=1.0; here
  /// we are more stringent and use T.mant_dig/4 digits more when close to 0.
  int feqrel2(T)(T x, T y)
  {
      static if (isComplexType!(T)) {
          return min(feqrel2(x.re, y.re), feqrel2(x.im, y.im));
      } else {
          const T shift = ctfe_powI(0.5, T.mant_dig/4);
          if (x < 0) {
              return feqrel(x - shift, y - shift);
          } else {
              return feqrel(x + shift, y + shift);
          }
      }
  }

(from blip.narrray.NArrayBasicOps)
Oct 20 2010
prev sibling next sibling parent Fawzi Mohamed <fawzi gmx.ch> writes:
On 20-ott-10, at 20:53, Don wrote:

 Andrei Alexandrescu wrote:
 On 10/20/10 10:52 CDT, Don wrote:
 I don't think it's possible to have a sensible default for absolute
 tolerance, because you never know what scale is important. You can  
 do a
 default for relative tolerance, because floating point numbers  
 work that
 way (eg, you can say they're equal if they differ in only the last 4
 bits, or if half of the mantissa bits are equal).

 I would even think that the acceptable relative error is almost  
 always
 known at compile time, but the absolute error may not be.

I wonder if it could work to set either number, if zero, to the smallest normalized value. Then proceed with the feqrel algorithm. Would that work? Andrei

feqrel actually treats zero fairly. There are exactly as many possible values almost equal to zero, as there are near any other number. So in terms of the floating point number representation, the behaviour is perfect. Thinking out loud here... I think that you use absolute error to deal with the difference between the computer's representation, and the real world. You're almost pretending that they are fixed point numbers. Pretty much any real-world data set has a characteristic magnitude, and anything which is more than (say) 10^^50 times smaller than the average is probably equivalent to zero.

The thing is twofold: for one thing, yes, numbers 10^^50 times smaller are not important, but the real problem is another one: you will probably add and subtract numbers of magnitude x, and on this operation the *absolute* error is x*epsilon. Note that the error is relative to the magnitude of the operands, not of the result, so it really is an absolute error.

Now the end result might have a relative error, but also an absolute error whose size depends on the magnitude of the operands. If the result is close to 0 the absolute error is likely to dominate, and checking the relative error will fail.

This is the case, for example, for matrix multiplication. In NArray I wanted to check the linear algebra routines with matrices of random numbers, and feqrel failed too often for numbers close to 0.

Obviously the right thing, as Walter said, is to let the user choose the magnitude of his results. In the code I posted I simply chose 0.5**(mantissa_bits/4), which is smaller than 1 but not horribly so.

One can easily make that an input parameter (it is the shift parameter in my code)

Fawzi
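Fawzi's point that sums carry an absolute error of about x*epsilon, set by the *operand* magnitude, can be shown in a couple of lines (a minimal sketch; magnitudes chosen so the result is the same even under extended-precision intermediates):

```d
unittest
{
    // Adding and subtracting operands of magnitude x leaves an absolute
    // error on the order of x*epsilon, no matter how small the
    // mathematically exact result is.
    double x = 1.0e20;
    double r = (x + 1.0) - x;   // mathematically 1.0

    // The spacing between doubles near 1e20 is about 16384, so the
    // +1.0 is lost entirely: the result is absolutely close to the
    // true answer, but relatively it shares no bits with it.
    assert(r == 0.0);
}
```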
Oct 20 2010
prev sibling parent "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
On Wed, 20 Oct 2010 23:28:07 +0200, Don wrote:

 [...] 
 
 I don't like the idea of having an absolute error by default. Although
 it is sometimes appropriate, I don't think it should be viewed as
 something which should always be included. It can be highly misleading.
 
 I guess that was the point of the original post in this thread.

That's right, and I also asked the question: Does anyone mind if I change the default value of maxAbsDiff to zero, or at least something way smaller than 1e-5? -Lars
Oct 20 2010
prev sibling parent Fawzi Mohamed <fawzi gmx.ch> writes:
On 20-ott-10, at 23:28, Don wrote:

 Fawzi Mohamed wrote:
 On 20-ott-10, at 20:53, Don wrote:
 Andrei Alexandrescu wrote:
 On 10/20/10 10:52 CDT, Don wrote:
 I don't think it's possible to have a sensible default for  
 absolute
 tolerance, because you never know what scale is important. You  
 can do a
 default for relative tolerance, because floating point numbers  
 work that
 way (eg, you can say they're equal if they differ in only the  
 last 4
 bits, or if half of the mantissa bits are equal).

 I would even think that the acceptable relative error is almost  
 always
 known at compile time, but the absolute error may not be.

I wonder if it could work to set either number, if zero, to the smallest normalized value. Then proceed with the feqrel algorithm. Would that work? Andrei

feqrel actually treats zero fairly. There are exactly as many possible values almost equal to zero, as there are near any other number. So in terms of the floating point number representation, the behaviour is perfect. Thinking out loud here... I think that you use absolute error to deal with the difference between the computer's representation, and the real world. You're almost pretending that they are fixed point numbers. Pretty much any real-world data set has a characteristic magnitude, and anything which is more than (say) 10^^50 times smaller than the average is probably equivalent to zero.

The thing is twofold: for one thing, yes, numbers 10^^50 times smaller are not important, but the real problem is another one: you will probably add and subtract numbers of magnitude x, and on this operation the *absolute* error is x*epsilon. Note that the error is relative to the magnitude of the operands, not of the result, so it really is an absolute error.

You have just lost precision. BTW -- I haven't yet worked out if we are disagreeing with each other, or not.

eheh, I think we both know the problems, and it is just a matter of the kind of tests we do more often. feqrel is a very important primitive, and it should be available; that is what Lars should have used. I happen to often test things where the result is a position, an energy difference, ... and in those cases I have a magnitude, and so implicitly also an absolute error.

In any case all this discussion was useful, as it made me improve my code by making the magnitude an explicit argument.

Fawzi
Oct 20 2010