www.digitalmars.com         C & C++   DMDScript  

digitalmars.D - [Submission- std.math] asinh, acosh, atanh, cis, productLog.

reply Don Clugston <dac nospam.com.au> writes:
Here are the three least trivial mathematical functions from math2:
asinh, acosh, atanh.
Those functions in math2 had copyright issues, and also had several 
bugs. The implementations here are more extensively tested, faster, and 
are public domain.
I also include cis(x) = cos + i sin, which is a fast operation on
x86 CPUs.

Also, productLog() and productLog_1().
This is the inverse of x * exp(x).
I've used the simple Mathematica names rather than more common (but 
obscure) names, LambertW() and LambertW_1(). This function crops up all 
over the place but usually goes unrecognised because it was unnamed 
until quite recently (despite being first described in the 1700s!)
If interested, look it up in Wikipedia.
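For anyone who wants a feel for what productLog computes before reading the D code below, here is a rough Python sketch of the same idea (Halley iteration on w*exp(w) = x; names and the initial-guess details are mine, not the submission's):

```python
import math

def product_log(x, tol=1e-15):
    """Approximate Lambert W (principal branch) by Halley iteration.
    Rough illustration only; valid for x >= -1/e."""
    if x < -1.0 / math.e:
        return float("nan")   # no real solution on this branch
    # crude initial guess: log1p near zero, log-based for large x
    w = math.log1p(x) if x < 3 else math.log(x) - math.log(math.log(x))
    for _ in range(50):
        e = math.exp(w)
        t = w * e - x
        t /= e * (w + 1.0) - 0.5 * (w + 2.0) * t / (w + 1.0)
        w -= t
        if abs(t) < tol * (1.0 + abs(w)):
            break
    return w

# the defining identity: product_log(y*exp(y)) recovers y
for y in (0.5, 1.0, 2.0, 5.0):
    x = y * math.exp(y)
    assert abs(product_log(x) - y) < 1e-12
```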

There's also an internal function for unit testing, which builds on my
feqrel() function.
This is a "second order" assert in that it allows a kind of 'for all'
operation for testing inverses. This function is not intended to be 
publicly documented, but should be useful for other functions in std.math.
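feqrel() itself isn't shown in this post. Roughly, it reports how many significant (mantissa) bits two reals agree on; here is a rough Python sketch of the idea (my approximation, not Don's D code):

```python
import math

def feqrel(x, y, mant_dig=53):
    """Approximate number of mantissa bits in which x and y agree.
    mant_dig=53 corresponds to IEEE double precision."""
    if x == y:
        return mant_dig                 # identical: full precision
    if math.isnan(x) or math.isnan(y):
        return 0
    diff = abs(x - y)
    scale = max(abs(x), abs(y))
    if diff >= scale:
        return 0                        # no bits in common
    # bits of agreement ~ -log2(relative difference)
    return min(mant_dig, int(-math.log2(diff / scale)))

assert feqrel(1.0, 1.0) == 53
assert feqrel(1.0, 3.0) == 0            # wrong by more than a factor of 2
assert feqrel(1.0, 1.0 + 1e-10) > 30    # agree to roughly 33 bits
```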

Hopefully this moves us a lot closer to deprecating std.math2.

Note that several of the other trig functions in std.math2 are just 
plain wrong.
acot(), asec() and acosec() return incorrect results.
They should be deleted. If anyone is using them, their code is wrong!

Also, sign(real x) ignores the sign of negative zero,
so that it violates the identity

sign(x)*abs(x) == x   when x=-0.0.

It should never return 0, since zero has a sign in IEEE.
It should be

real sign(real x)
{
   return signbit(x) ? -1.0 : 1.0;
}

and likewise the integer overloads should never return 0 either.

int sign(int x)
{
    return x<0 ? -1 : 1;
}

long sign(long x)
{
    return x<0 ? -1 : 1;
}
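The -0.0 behaviour is easy to check in any IEEE-754 environment. Here is the same idea sketched in Python, with math.copysign standing in for signbit (an illustration, not the proposed D code):

```python
import math

def sign(x):
    """Sign of x honouring IEEE negative zero: never returns 0."""
    return -1.0 if math.copysign(1.0, x) < 0 else 1.0

# the identity sign(x)*abs(x) == x holds even at x = -0.0:
x = -0.0
y = sign(x) * abs(x)
assert math.copysign(1.0, y) == -1.0   # y is -0.0; the sign survives
assert sign(-0.0) == -1.0
assert sign(0.0) == 1.0
assert sign(-3.5) == -1.0
```

A signum-style sign() that returned 0 at zero would map -0.0 to +0.0 here, breaking the identity.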

-Don.

//------------------------------------------------------------------
// (Private) functions for unit tests

/**
  Test the consistency of a real function which has an inverse.
  Returns the worst (minimum) value of feqrel(x, 
inversefunc(forwardfunc(x)))
  for all x in the domain.
  Author: Don Clugston. License: Public Domain
  Params:
         forwardfunc   Function to be tested
         inversefunc   Inverse of forwardfunc
         domain        A sequence of pairs of numbers giving the first 
and last points which are valid for the function.
eg. (-real.infinity, real.infinity) ==> valid everywhere
     (-real.infinity, -real.min, real.min, real.infinity) ==> valid for 
x!=0.
  Returns:
     number of bits for which x and inversefunc(forwardfunc(x)) are 
equal for
     every point x in the domain.
     -1 = at least one point is wrong by a factor of 2 or more.
*/
int consistencyRealInverse(real function (real) forwardfunc,
                     real function (real) inversefunc, real [] domain ...)
{
       assert(domain.length >= 2);       // must have at least one valid range
       assert((domain.length & 1) == 0); // must have an even number of endpoints

       int worsterror = real.mant_dig + 1;
       real worstx = domain[0]; // where does the biggest discrepancy occur?

       void testPoint(real x) {
          for (int i = 0; i < domain.length; i += 2) {
             if (x >= domain[i] && x <= domain[i+1]) {
                int u = feqrel(x, inversefunc(forwardfunc(x)));
                if (u < worsterror) { worsterror = u; worstx = x; }
                return;
             }
          }
       }
       // test the edges of the domains
       foreach(real y; domain) testPoint(y);
       real x;
       // first, go from 1 to infinity
       for (x = 1.01; x != x.infinity; x *= 2.83) testPoint(x);
       // then from 1 to +0
       for (x = 0.98; x != 0; x *= 0.401) testPoint(x);
       // from -1 to -0
       for (x = -0.93; x != 0; x *= 0.403) testPoint(x);
       // from -1 to -infinity
       for (x = -1.09; x != -x.infinity; x *= 2.97) testPoint(x);
       // writefln("Worst error is ", worsterror, " at x= ", worstx);
       return worsterror;
}

// return true if x is -0.0
bit isNegZero(real x)
{
    return (x==0) && (signbit(x)==1);
}

// return true if x is +0.0
bit isPosZero(real x)
{
    return (x==0) && (signbit(x)==0);
}


//------------------------------------------------------------------

//  cis(theta) = cos(theta) + sin(theta)i.
version (X86)
{
     // On x86, this is almost as fast as sin(x).
     creal cis(real x)
     {
      asm {
          fld x;
          fsincos;
          fxch st(1), st(0);
       }
     }
} else {
     creal cis(real x)
     {
       return cos(x) + sin(x)*1i;
     }
}

unittest {
     assert(cis(0)==1+0i);
     assert(cis(1.3e5)==cos(1.3e5)+sin(1.3e5)*1i);
     creal c = cis(real.nan);
     assert(isnan(c.re) && isnan(c.im));
     c = cis(real.infinity);
     assert(isnan(c.re) && isnan(c.im));
}
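Since cis(x) is just exp(i*x), the non-x86 fallback can be sanity-checked against the complex exponential. A quick Python check (illustrative only, not the D code):

```python
import cmath
import math

# cos(x) + i*sin(x) should match exp(i*x) for any real x
for x in (0.0, 1.0, -2.5, 1.3e5):
    c = complex(math.cos(x), math.sin(x))
    assert cmath.isclose(c, cmath.exp(1j * x), rel_tol=1e-12)
```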

//------------------------------------------------------------------
//        Inverse hyperbolic trig functions

/** Inverse hyperbolic sine - the inverse of sinh(x)
    Author: Don Clugston   License: Public Domain.
    Definition:
    asinh(x) =  log( x + sqrt( x*x + 1 )) if x >= +0
    asinh(x) = -log(-x + sqrt( x*x + 1 )) if x <= -0
    Preserves sign of -0.0
    Domain: -infinity..+infinity
    Range: -infinity, -log(real.max)..log(real.max), +infinity
    Special values:
    -inf   -inf
    -0     -0
    +0     +0
    +inf   +inf
*/
real asinh(real x)
{
    if (fabs(x) > 1/real.epsilon)   // beyond this point, x*x + 1 == x*x
        return copysign(LN2 + log(fabs(x)), x);
    else {
        // sqrt(x*x + 1) == 1 + x*x / ( 1 + sqrt(x*x + 1) )
        return copysign(log1p(fabs(x) + x*x / (1 + sqrt(x*x + 1)) ), x);
    }
}
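The log1p rearrangement avoids cancellation for small x. A quick Python check of the same rearrangement against the library asinh (illustrative only, not the D code):

```python
import math

def asinh_via_log1p(x):
    """asinh via log1p, using sqrt(x*x+1) == 1 + x*x/(1 + sqrt(x*x+1))."""
    a = abs(x)
    return math.copysign(
        math.log1p(a + a * a / (1.0 + math.sqrt(a * a + 1.0))), x)

# agrees with the library asinh across small and moderate arguments
for v in (1e-20, 1e-5, 0.5, 3.0, -2.0):
    assert math.isclose(asinh_via_log1p(v), math.asinh(v), rel_tol=1e-13)
```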

unittest {
  assert(isPosZero(asinh(0.0)));
  assert(isNegZero(asinh(-0.0)));
  assert(asinh(real.infinity)==real.infinity);
  assert(asinh(-real.infinity)==-real.infinity);
  assert(isnan(asinh(real.nan)));
  version (X86) {
	 assert( consistencyRealInverse(&asinh, &sinh, -double.max, double.max)
	      >= double.mant_dig);

	 assert( consistencyRealInverse(&sinh, &asinh,
	      -log(real.max)*(1+real.epsilon), log(real.max)*(1-real.epsilon) )
	      >= double.mant_dig);
  }
}

/** Inverse hyperbolic cosine - the inverse of cosh(x)
    Author: Don Clugston   License: Public Domain.
    Definition:
    acosh(x) = log(x + sqrt( x*x - 1 ))
    Domain: 1..infinity
    Range: 0..log(real.max), infinity
    Special values:
    <1     nan
    1      0
    +inf   +inf
*/
real acosh(real x)
{
    if (x > 1/real.epsilon)
        return LN2 + log(x);
    else
        return log(x + sqrt(x*x - 1));
}

unittest {
  assert(isnan(acosh(0.9)));
  assert(isnan(acosh(real.nan)));
  assert(acosh(1)==0.0);
  assert(acosh(real.infinity) == real.infinity);
  version (X86) {
    assert( consistencyRealInverse(&acosh, &cosh, 1, double.max)
         >= double.mant_dig);
    assert( consistencyRealInverse(&cosh, &acosh, 1,
         log(real.max)*(1-real.epsilon)) >= real.mant_dig-1);
  }
}

/** Inverse hyperbolic tangent - the inverse of tanh(x)
    Author: Don Clugston   License: Public Domain.
    Definition:
    atanh(x) = log( (1+x)/(1-x) ) / 2
    Domain: -1..1
    Range: -infinity..infinity
*/
real atanh(real x)
{
    // log( (1+x)/(1-x) ) == log( 1 + (2*x)/(1-x) )
    return 0.5 * log1p( 2 * x / (1 - x) );
}

unittest {
  assert(isPosZero(atanh(0.0)));
  assert(isNegZero(atanh(-0.0)));
  assert(isnan(atanh(real.nan)));
  assert(isnan(atanh(-real.infinity)));
  version (X86) {
    assert( consistencyRealInverse(&atanh, &tanh, -1, 1) >= real.mant_dig-2);
    assert( consistencyRealInverse(&tanh, &atanh, -1, 1) >= real.mant_dig-1);
  }
}

//------------------------------------------------------------------
//        Lambert's W function, also known as ProductLog

/** productLog() is the inverse of x*exp(x) and is also known as
    Lambert's W-function.
    productLog(x*exp(x)) == x, where x >= -exp(-1)
    A second inverse, productLog_1(x), can be defined for real x <= -exp(-1).
    Author: Don Clugston.  Date: 2005-09-01.  License: Public Domain.
    Using an algorithm originally developed by Keith Briggs
    (http://keithbriggs.info)
*/
real productLog(real x)
{
    real w;
    if (x < 0.5) {
        if (fabs(x) < x.epsilon/2) return x; // exp(x)==1 for small x.
        if (x <= -1/E) {
            if (x == -1/E) return -1; // avoid div by zero
            return x.nan; // no solution
        }
        real p = sqrt( 2.0 * ( E*x + 1.0 ) );
        w = -1.0 + p*(1.0 + p*(-1/3.0 + p*11.0/72.0));
    } else {
        if (isnan(x)) return x.nan;
        w = log(x);
        if (x > 3) {
            w = w - log(w);
            if (x == x.infinity) return x.infinity;
        }
    }
    real e, t, w2 = w;
    int kount = 0;
    do { // Halley iterations. Never requires more than 4 iterations.
        w = w2;
        e = exp(w);
        t = w*e - x;
        t /= e*(w+1.0) - 0.5*(w+2.0)*t / (w+1.0);
        w2 -= t;
        kount++;
        if (kount > 20) { writefln(x); return x; }
    } while (fabs(t) >= x.epsilon*(1.0 + fabs(w2)));
    return w2;
}

/** The inverse of y*exp(y) where y <= -exp(-1)
    Params:
        x    -1/E <= x <= 0
    Returns: y such that y*exp(y)==x and y <= -exp(-1).
    Author: Don Clugston.  Date: 2005-09-01.  License: Public Domain.
    Using an algorithm originally developed by Keith Briggs
    (http://keithbriggs.info)
*/
real productLog_1(real x)
{
    real w;
    if (x < -1/E || x >= 0.0 || isnan(x)) return x.nan;
    // initial approx for iteration...
    if (x < -1e-6) { // series about -1/e
        if (x == -1/E) return -1; // avoid div by zero
        real p = -sqrt(2.0*(E*x + 1.0));
        w = -1.0 + p*(1.0 + p*(-1/3.0 + p*11.0/72.0));
    } else { // asymptotic near zero
        real l1 = log(-x);
        real l2 = log(-l1);
        w = l1 - l2 + l2/l1;
    }
    if (x > -real.epsilon) return -x.infinity;
    real e, t, w2 = w;
    do { // Halley iterations. Never requires more than 10 iterations.
        w = w2;
        e = exp(w);
        t = w*e - x;
        t /= e*(w+1.0) - 0.5*(w+2.0)*t / (w+1.0);
        w2 -= t;
    } while (fabs(t) >= x.epsilon*(1.0 + fabs(w2)));
    return w2;
}

// x*exp(x).
// Used only for testing
real productExp(real x) { return x*exp(x); }

unittest {
  assert(productLog(-1/E)==-1);
  assert(isnan(productLog(real.nan)));
  assert(isNegZero(productLog(-0.0)));
  assert(isPosZero(productLog(0.0)));
  assert(productLog(E)==1);
  version (X86) {
    assert( consistencyRealInverse(&productLog, &productExp, -1/E, double.max)
         >= double.mant_dig);
    assert( consistencyRealInverse(&productExp, &productLog, -1, 11000)
         >= real.mant_dig-3);
  }
  assert(productLog_1(-1/E)==-1);
  assert(productLog_1(-real.infinity)==0);
  assert(isnan(productLog_1(real.nan)));
  version (X86) {
    assert( consistencyRealInverse(&productLog_1, &productExp,
         -1/E, -real.epsilon) >= double.mant_dig);
    assert( consistencyRealInverse(&productExp, &productLog_1,
         log(real.epsilon), -1) >= double.mant_dig);
  }
}
Sep 12 2005
next sibling parent reply Don Clugston <dac nospam.com.au> writes:
A couple more comments:

* The following functions are defined in std.c.math, but if you use 
them, you'll get a link time error:

acoshl(), asinhl(), atanhl().

They should be removed from std.c.math and the (commented out) 
forwarding functions in std.math should also be removed.

* The list of constants in std.math should include

const ireal I = 1.0i;

for compatibility with C99 and for situations like

cos(PI) + I * sin(PI)

where you want to convert a function result from real to imaginary.

-Don.
Sep 12 2005
next sibling parent reply Sean Kelly <sean f4.ca> writes:
In article <dg41m1$1o5k$1 digitaldaemon.com>, Don Clugston says...
A couple more comments:

* The following functions are defined in std.c.math, but if you use 
them, you'll get a link time error:

acoshl(), asinhl(), atanhl().

With DMD? That's odd. dm/include/math.h has declarations for these
functions.

Sean
Sep 12 2005
parent reply Don Clugston <dac nospam.com.au> writes:
Sean Kelly wrote:
* The following functions are defined in std.c.math, but if you use 
them, you'll get a link time error:

acoshl(), asinhl(), atanhl().

With DMD? That's odd. dm/include/math.h has declarations for these functions. Sean

Yes, it is odd. Test program:
--------------------------------
import std.c.math;

int main()
{
    real x = acoshl(1.0);
    return 0;
}
-------------------------------
c:\dmd\bin\..\..\dm\bin\link.exe bug,bug.exe,,user32+kernel32,bug.def/noi;
OPTLINK (R) for Win32  Release 7.50B1
Copyright (C) Digital Mars 1989 - 2001  All Rights Reserved

bug.obj(bug)
 Error 42: Symbol Undefined _acoshl
--- errorlevel 1
--------------------------------------
Why? Well, they are not standard C routines. Not included in VC++, for
example. So they probably aren't part of gcc, hence they must be defined
for the sake of gdc. They were probably removed from the dmd lib when gdc
was born. I suspect that the functions in dmc are copyright and so can't
be used in phobos. Just a guess.

- Don.
Sep 12 2005
parent Sean Kelly <sean f4.ca> writes:
In article <dg49vr$1vch$1 digitaldaemon.com>, Don Clugston says...
Sean Kelly wrote:
* The following functions are defined in std.c.math, but if you use 
them, you'll get a link time error:

acoshl(), asinhl(), atanhl().

With DMD? That's odd. dm/include/math.h has declarations for these functions. Sean

Yes, it is odd. Test program:
--------------------------------
import std.c.math;

int main()
{
    real x = acoshl(1.0);
    return 0;
}
-------------------------------

What about this:

real x = acoshl( cast(real) 1.0 );
Why? Well, they are not standard C routines.

They are standard C routines as of the C99 spec. But DMC is more up to
date than most compilers with respect to C99 library conformance.

For what it's worth, I have a full implementation of std.c.math here:

http://svn.dsource.org/projects/ares/trunk/src/ares/std/c/math.d

It's correct for DMD as far as I know, but other compilers may not
implement all the declared functions.

Sean
Sep 12 2005
prev sibling parent "Walter Bright" <newshound digitalmars.com> writes:
"Don Clugston" <dac nospam.com.au> wrote in message
news:dg41m1$1o5k$1 digitaldaemon.com...
 * The following functions are defined in std.c.math, but if you use
 them, you'll get a link time error:

 acoshl(), asinhl(), atanhl().

That's because they aren't implemented in the C runtime library yet. :-(
 * The list of constants in std.math should include

 const ireal I = 1.0i;

 for compatibility with C99 and for situations like

I disagree with forwarding I from C into D; it's not needed for compatibility.
 cos(PI) + I * sin(PI)

 where you want to convert a function result from real to imaginary.

cos(PI) + sin(PI) *1i;
Sep 12 2005
prev sibling parent reply "Walter Bright" <newshound digitalmars.com> writes:
"Don Clugston" <dac nospam.com.au> wrote in message
news:dg40b2$1mud$1 digitaldaemon.com...
 Here are the three least trivial mathematical functions from math2:
 asinh, acosh, atanh.
 Those functions in math2 had copyright issues, and also had several
 bugs. The implementations here are more extensively tested, faster, and
 are public domain.
 I also include cis(x) = cos + i sin, which is a fast operation on
 x86 CPUs.

Good!
 Also, productLog() and productLog_1().
 This is the inverse of x * exp(x).
 I've used the simple Mathematica names rather than more common (but
 obscure) names, LambertW() and LambertW_1(). This function crops up all
 over the place but usually goes unrecognised because it was unnamed
 until quite recently (despite being first described in the 1700s!)
 If interested, look it up in Wikipedia.

 There's also an internal function for unit testing, which builds on my
 feqrel() function.
 This is a "second order" assert in that it allows a kind of 'for all'
 operation for testing inverses. This function is not intended to be
 publicly documented, but should be useful for other functions in std.math.

Ok.
 Hopefully this moves us a lot closer to deprecating std.math2.

I want to get rid of std.math2.
 Note that several of the other trig functions in std.math2 are just
 plain wrong.
 acot(), asec() and acosec() return incorrect results.
 They should be deleted. If anyone is using them, their code is wrong!

Let's just get rid of std.math2.
 Also, sign(real x) ignores the sign of negative zero,
 so that it violates the identity

 sign(x)*abs(x) == x   when x=-0.0.

 It should never return 0, since zero has a sign in IEEE.
 It should be

 real sign(real x)
 {
    return signbit(x) ? -1.0 : 1.0;
 }

 and therefore ints should never return 0 either.

 int sign(int x)
 {
     return x<0 ? -1 : 1;
 }

 long sign(long x)
 {
     return x<0 ? -1 : 1;
 }

sign() is a redundant function; it's more trouble to document and look up
than it's worth. signbit() is adequate for the job.
Sep 12 2005
parent reply Derek Parnell <derek psych.ward> writes:
On Mon, 12 Sep 2005 13:14:23 -0700, Walter Bright wrote:


[snip]
 
 Also, sign(real x) ignores the sign of negative zero,
 so that it violates the identity

 sign(x)*abs(x) == x   when x=-0.0.

 It should never return 0, since zero has a sign in IEEE.
 It should be

 real sign(real x)
 {
    return signbit(x) ? -1.0 : 1.0;
 }

 and therefore ints should never return 0 either.

 int sign(int x)
 {
     return x<0 ? -1 : 1;
 }

 long sign(long x)
 {
     return x<0 ? -1 : 1;
 }

sign() is a redundant function; it's more trouble to document and look up
than it's worth. signbit() is adequate for the job.

Garbage! It's called 'abstraction'. The term 'signbit' implies that we are
asking for a detail of the implementation rather than the mathematical
concept of 'sign'. You could just do an 'alias signbit sign;' if you
really didn't want to do a better implementation. However, there isn't a
signbit() function for ints and other number types.

-- 
Derek Parnell
Melbourne, Australia
13/09/2005 7:24:32 AM
Sep 12 2005
parent reply "Walter Bright" <newshound digitalmars.com> writes:
"Derek Parnell" <derek psych.ward> wrote in message
news:y1h8ymdxkhwn.tg27eopfhs2k.dlg 40tude.net...
 Garbage! Its called 'abstraction'. The term 'signbit' implies that we are
 asking for a detail of the implementation rather than the mathematical
 concept of 'sign'.

One always must be careful when thinking about the "mathematical" concept
of sign, because floating point doesn't work that way. There's the issue
of negative 0, which is not really a mathematical concept, it's more of a
kludge to deal with the finite precision of the floating point format.
There's also the sign bit of a NaN number; what does that mean?

Any time one wants to test for the signedness of a floating point number,
one has to think about "what happens if it is -0" and "what happens if it
is a NaN". Making a function called "sign()" can't resolve the problem.
It'll pretend to, and that'll lead to bugs. If one wants to go beyond
(a<0), using signbit() is the right approach, because it causes one to
have to think about the other cases.

The signbit() function is necessary because otherwise getting at the sign
bit of the floating point value is a clumsy, nonportable hack. A sign()
function does not add sufficient value, and it may even serve to
obfuscate things because the reader will have to go look up the doc on it
to see what it does with -0 and -NaN. std.math should strive to be an
orthogonal set of nontrivial building blocks. Don's work with the arc
hyperbolics is a perfect example of great additions to it.
Sep 12 2005
next sibling parent reply Derek Parnell <derek psych.ward> writes:
On Mon, 12 Sep 2005 16:59:23 -0700, Walter Bright wrote:

 "Derek Parnell" <derek psych.ward> wrote in message
 news:y1h8ymdxkhwn.tg27eopfhs2k.dlg 40tude.net...
 Garbage! Its called 'abstraction'. The term 'signbit' implies that we are
 asking for a detail of the implementation rather than the mathematical
 concept of 'sign'.

One always must be careful when thinking about the "mathematical" concept of sign, because floating point doesn't work that way. There's the issue of negative 0, which is not really a mathematical concept, it's more of a kludge to deal with the finite precision of the floating point format. There's also the sign bit of a NaN number, what does that mean?

Huh? Okay, it must be me. I'm a simple person with simple ideas. When
somebody asks me what's the sign of such-and-such, I just say "Negative"
if it's less than zero, "Positive" if greater than zero and "Nothing" if
it is zero OR not a number. What has that got to do with floating point,
conceptually or physically?
 Any time one wants to test for the signedness of a floating point number,
 one has to think about "what happens if it is -0" 

Is -0 less than zero? If yes, then it's negative.
 and "what happens if it is a NaN". 

NaN is Not A Number so therefore it doesn't have a sign.
Making a function called "sign()" can't resolve the problem. 

I think it can ...

enum signedness { Negative = -1, Nothing = 0, Positive = 1 }

template getsign(TYPE)
{
    signedness getsign( TYPE x)
    {
        if (x < 0) return signedness.Negative;
        if (x > 0) return signedness.Positive;
        return signedness.Nothing;
    }
}

alias getsign!(float)  sign;
alias getsign!(double) sign;
alias getsign!(real)   sign;
alias getsign!(int)    sign;
alias getsign!(long)   sign;
alias getsign!(short)  sign;
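Derek's three-valued getsign translates directly to other languages. Here is a rough Python rendering with an Enum (my translation, not Derek's D code). Note that -0.0 and NaN both land in Nothing, since neither compares less than zero:

```python
from enum import Enum

class Signedness(Enum):
    Negative = -1
    Nothing = 0
    Positive = 1

def getsign(x):
    """Three-valued sign: Negative, Positive, or Nothing (zero or NaN)."""
    if x < 0:
        return Signedness.Negative
    if x > 0:
        return Signedness.Positive
    return Signedness.Nothing   # zero, -0.0, and NaN all land here

assert getsign(-3) is Signedness.Negative
assert getsign(0.0) is Signedness.Nothing
assert getsign(-0.0) is Signedness.Nothing      # -0.0 is not < 0
assert getsign(float("nan")) is Signedness.Nothing
```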
It'll
 pretend to, and that'll lead to bugs. If one wants to go beyond (a<0), using
 signbit() is the right approach, because it causes one to have to think
 about the other cases.

So does using enums.
 The signbit() function is necessary because otherwise getting at the sign
 bit of the floating point value is a clumsy, nonportable hack. 

Yes, signbit() is required to GET AT THE SIGN BIT. But that is not what
I'm talking about. That is an implementation detail of the particular
floating point representation system you choose to use. It has nothing to
do with the simple mathematical question "what is the sign of X?", which
may be a floating point, fixed point, or integer.
 A sign() function does not add sufficient value, 

Says who? You? And how come you get to be the Grand Arbiter of
"sufficient value"? What empirical measurement did you apply to get to
that decision? Define 'sufficient' in this context. I would say that
value is in the eye of the beholder.
and it may even serve to obfuscate
 things because the reader will have to go look up the doc on it to see what
 it does with -0 and -NaN. std.math should strive to be an orthogonal set of
 nontrivial building blocks. 

Why 'nontrivial'? What may be trivial to one person is not so to others.
Some people regard various sort algorithms as trivial. The library should
be an enabler.

-- 
Derek (skype: derek.j.parnell)
Melbourne, Australia
13/09/2005 2:19:42 PM
Sep 12 2005
parent reply Don Clugston <dac nospam.com.au> writes:
Derek Parnell wrote:
 On Mon, 12 Sep 2005 16:59:23 -0700, Walter Bright wrote:
 
 
"Derek Parnell" <derek psych.ward> wrote in message
news:y1h8ymdxkhwn.tg27eopfhs2k.dlg 40tude.net...

Garbage! Its called 'abstraction'. The term 'signbit' implies that we are
asking for a detail of the implementation rather than the mathematical
concept of 'sign'.

One always must be careful when thinking about the "mathematical" concept of sign, because floating point doesn't work that way. There's the issue of negative 0, which is not really a mathematical concept, it's more of a kludge to deal with the finite precision of the floating point format. There's also the sign bit of a NaN number, what does that mean?

Huh? Okay, it must be me. I'm a simple person with simple ideas. When somebody asks me what's the sign of such-and-such, I just say "Negative" if its less than zero, "Positive" if greater than zero and "Nothing" if it is zero OR not a number. What has that got to do with floating point, conceptually or physically?
Any time one wants to test for the signedness of a floating point number,
one has to think about "what happens if it is -0" 

is -0 less than zero? If yes, then its negative.

Ah. It is not less than zero. The sign bit doesn't correspond exactly to
mathematical sign. It's almost a 'history' bit -- did we get to zero from
above or below?

1.0/+infinity = 0.0
1.0/-infinity = -0.0

and

1.0 / 0.0  = +infinity
1.0 / -0.0 = -infinity

Therefore, given x, if we let

y = sign(x) * abs(x)

then with sign as you've described, sign(-0.0)=0 so y = 0.0. With IEEE,
you'll have x==y. But...

1.0/x = -infinity
1.0/y = +infinity

so x and y are not the same. It is important for sign(x)*abs(x) to be the
same as x. If you want to know "is it less than zero", use the built-in <
operator.
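These identities hold in any IEEE-754 environment. In Python, for instance (1.0/0.0 raises ZeroDivisionError there, so only the 1/(+-infinity) half is shown; illustrative only):

```python
import math

inf = math.inf
pz = 1.0 / inf     # 1/+infinity -> +0.0
nz = 1.0 / -inf    # 1/-infinity -> -0.0
assert pz == nz == 0.0                  # both compare equal to zero...
assert math.copysign(1.0, pz) == 1.0    # ...but +0.0 carries a positive sign
assert math.copysign(1.0, nz) == -1.0   # ...and -0.0 a negative one
# Python raises ZeroDivisionError for 1.0/0.0, so the 1/(+-0) = (+-)infinity
# half of the identity cannot be shown directly here.
```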
 Yes, signbit() is required to GET AT THE SIGN BIT. But that is not what I'm
 talking about. That is an implementation detail of the particular floating
 point representation system you choose to use. It has nothing to do with
 the simple mathematical question "what is the sign of X?", which may be a
 floating point, fixed point, or integer.

A sign() function does not add sufficient value, 

Says who? You? And how come you get to be the Grand Arbiter of "sufficient value"? What empirical measurement did you apply to get to that decision? Define 'sufficient' in this context. I would say that value is in the eye of the beholder.

and it may even serve to obfuscate
things because the reader will have to go look up the doc on it to see what
it does with -0 and -NaN. std.math should strive to be an orthogonal set of
nontrivial building blocks. 

Why 'nontrivial'? What may be trivial to one person is not so to others. Some people regard various sort algorithms as trivial. The library should be an enabler.

Have you heard the saying, "every line of code is a liability"? You have
to document and maintain it. And library users have to learn all the
available functions. It's a cost-benefit thing. Of course it's
subjective, and is Walter's opinion, but if you want to persuade him, you
need to provide use cases. The cost is clear. The benefit at this stage
is not. When would you use sign()?
Sep 12 2005
parent reply Derek Parnell <derek psych.ward> writes:
On Tue, 13 Sep 2005 08:36:00 +0200, Don Clugston wrote:

 Derek Parnell wrote:

[snip]
 When somebody asks me what's the sign of such-and-such, I just say
 "Negative" if its less than zero, "Positive" if greater than zero and
 "Nothing" if it is zero OR not a number.
 


[snip]
 is -0 less than zero? If yes, then its negative.

Ah. It is not less than zero. The sign bit doesn't correspond exactly to mathematical sign.

I wasn't talking about the sign bit. That's an implementation artifact. I am talking about the mathematical sign.
 It's almost a 'history' bit -- did we get to zero from above or below?
 1.0/+infinity = 0.0
 1.0/-infinity = -0.0
 
 and
 1.0 / 0.0  = +infinity
 1.0 / -0.0 = -infinity

So -0.0 is zero that's come from the direction of -infinity.
 Therefore, if
 Given x, if we let
 y = sign(x) * abs(x)
 then with sign as you've described, sign(-0.0)=0 so y = 0.0.

What is the value of abs(-0.0)? Is it zero? If it is, then the value of sign(-0.0) is not relevant.
 With IEEE, you'll have x==y.
 But...
 1.0/x = -infinity
 1.0/y = +infinity.
 
 so x and y are not the same.
 It is important for sign(x)*abs(x) to be the same as x.

Why is that important? Why does sign() have to return -1, 0, or 1? It could return any one of three values, where one informs you that the input is negative, another informs you that it is positive, and the third that it has no mathematical sign.
 If you want to know "is it less than zero", use the built in < operator.

Yes, that is what I would do. I can't see any reason why I would need to use a sign() function. I am just arguing from a philosophical point of view.
 Yes, signbit() is required to GET AT THE SIGN BIT. But that is not what I'm
 talking about. That is an implementation detail of the particular floating
 point representation system you choose to use. It has nothing to do with
 the simple mathematical question "what is the sign of X?", which may be a
 floating point, fixed point, or integer.

A sign() function does not add sufficient value, 

Says who? You? And how come you get to be the Grand Arbiter of "sufficient value"? What empirical measurement did you apply to get to that decision? Define 'sufficient' in this context. I would say that value is in the eye of the beholder.

and it may even serve to obfuscate
things because the reader will have to go look up the doc on it to see what
it does with -0 and -NaN. std.math should strive to be an orthogonal set of
nontrivial building blocks. 

Why 'nontrivial'? What may be trivial to one person is not so to others. Some people regard various sort algorithms as trivial. The library should be an enabler.

Have you heard the saying, "every line of code is a liability"?

No, I haven't. I would hope that every line of code is an asset, even if
it does have a cost associated with it.
You have to document and maintain it. 

Yep, that's the cost. Pretty small one too.
 And library users have to learn all the 
 available functions.

No they don't. I only learn about the ones I need to use.
 It's a cost benefit thing. Of course it's subjective, and is Walter's 
 opinion, but if you want to persuade him, you need to provide use cases.
 The cost is clear. The benefit at this stage is not. When would you use 
 sign() ?

No, I can't see myself using it. But once it exists and is tested, its
cost is microscopic and it would be appreciated by someone, therefore
benefiting both the user and the librarian.

-- 
Derek (skype: derek.j.parnell)
Melbourne, Australia
13/09/2005 5:13:32 PM
Sep 13 2005
next sibling parent reply Don Clugston <dac nospam.com.au> writes:
is -0 less than zero? If yes, then its negative.

Ah. It is not less than zero. The sign bit doesn't correspond exactly to mathematical sign.

I wasn't talking about the sign bit. That's an implementation artifact. I am talking about the mathematical sign.
It's almost a 'history' bit -- did we get to zero from above or below?
1.0/+infinity = 0.0
1.0/-infinity = -0.0

and
1.0 / 0.0  = +infinity
1.0 / -0.0 = -infinity

So -0.0 is zero that's come from the direction of -infinity.

Exactly.
Given x, if we let
y = sign(x) * abs(x)
then with sign as you've described, sign(-0.0)=0 so y = 0.0.

What is the value of abs(-0.0)? Is it zero? If it is, then the value of sign(-0.0) is not relevant.
With IEEE, you'll have x==y.
But...
1.0/x = -infinity
1.0/y = +infinity.

so x and y are not the same.
It is important for sign(x)*abs(x) to be the same as x.

Why is that important? Why does sign() have to return -1, 0, or 1? It could return any one of three values, where one informs you that the input is negative, another informs you that it is positive, and the third that it has no mathematical sign.

OK, that would work. But then you have to look up the docs for sign() to
find out what the values are, and it doesn't help with writing or reading
code. And the value of the function drops.

In the extreme case, you could have a library with functions like

real plus57point36(real x) { return x + 57.36; }

Trivial, and useless. A problem with trivial functions is that there are
so many of them. Why should only some get library status? How can you
decide which are in, and which are out? I really think you have to be
confident that the functions will be useful in multiple programs.
Functions which are trivial and rarely used are generally not worth the
effort. But there's definitely a gray area where it is very subjective.
Sep 13 2005
parent reply Derek Parnell <derek psych.ward> writes:
On Tue, 13 Sep 2005 10:48:09 +0200, Don Clugston wrote:


[snip]

Given x, if we let
y = sign(x) * abs(x)
then with sign as you've described, sign(-0.0)=0 so y = 0.0.

What is the value of abs(-0.0)? Is it zero? If it is, then the value of sign(-0.0) is not relevant.


Am I writing with invisible ink?

My point was that if "sign(x) * abs(x) == x" must be true for all values
of x, and if the value of abs(-0.0) is zero, then it is irrelevant *in
this context* what the value of sign(-0.0) is, because the result of
sign(-0.0)*abs(-0.0) will always be zero and never -0.0.
With IEEE, you'll have x==y.
But...
1.0/x = -infinity
1.0/y = +infinity.

so x and y are not the same.
It is important for sign(x)*abs(x) to be the same as x.

Why is that important? Why does sign() have to return -1, 0, or 1? It could return any one of three values, where one informs you that the input is negative, another informs you that it is positive, and the third that it has no mathematical sign.

OK, that would work. But then you have to look up the docs for sign() to find out what the values are,

What rubbish! We need to look up the docs for all functions at least
once. And what's hard about doing that, especially with the whizz-bang
IDEs that modern-day coders are so dependent on?
 and it doesn't help with writing or reading code.
 And the value of the function drops.

Are you trying to be difficult? How is this hard to read ...

   if (sign(x) == Negative)

Is that not self documenting? IMNHSO, that is better than something like ...

   if (sign(x) == -1)

because -1 is MINUS-ONE and not Negative. It might be a symbolic code that used to represent Negative, but in itself, it is not Negative, it's minus-one! Just as my 'Negative' is a symbolic representation of the concept, at least it is self documenting.
 In the extreme case, you could have a library with functions like
 
 real plus57point36(real x) { return x + 57.36; }
 
 Trivial, and useless.

Now you are insulting me. Even the hypothetical sign() function would get hugely more usage than this *extreme* example, so what's your point?
 A problem with trivial functions is that there are so many of them. Why 
 should only some get library status? 
 How can you decide which are in, and which are out? I really think you 
 have to be confident that the functions will be useful in multiple 
 programs. Functions which are trivial and rarely used are generally not 
 worth the effort. But there's definitely a gray area where it is very 
 subjective.

Isn't one of the lovely things about 'trivial' routines that they can be inlined by the compiler and still create legible code by expressing the coder's intentions better than spelling out the operation in full every time?

--
Derek Parnell
Melbourne, Australia
13/09/2005 9:22:35 PM
Sep 13 2005
parent reply Don Clugston <dac nospam.com.au> writes:
 
 Am I writing with invisible ink?
 
 My point was, that if "sign(x) * abs(x) == x" must be true for all values
 of x, and if the value of abs(-0.0) is zero, then it is irrelevant *in this
 context* what the value of sign(-0.0) is because the result of
 sign(-0.0)*abs(-0.0) will always be zero and never -0.0

AHA! Here's the heart of the matter. That is what you'd expect, intuitively. But intuition is misleading in this case.

Indeed, abs(-0.0) returns 0.0. But -1 * 0.0 returns -0.0, whereas 1 * 0.0 returns 0.0. It does make a difference what sign(-0.0) returns. I think this is why we have this disagreement. sign(x) returning -1, 0, or 1 is intuitive, but unfortunately it's wrong for computer mathematics.

There is a standard mathematical function sign(), AKA signum(), which returns -1, 0, or 1, depending on the sign. You'd normally use it in an expression, multiplying by something else. Eg:

   sign(x) * log(abs(x))

An important feature of this function is that sign(x)*abs(x) returns x. Unfortunately, you cannot satisfy this identity without violating your intuition (sign(0) can't return 0). And since sign(0) can't return 0, another identity is violated: if (a==b) implies sign(a)==sign(b) will be violated if a is 0.0 and b is -0.0. Both are zero, yet they have opposite signs.
 Are you trying to be difficult? How is this hard to read ...
 
   if (sign(x) == Negative)
 
 Is that not self documenting? IMNHSO, that is better than something like
   if (sign(x) == -1) 
 because -1 is MINUS-ONE and not Negative. It might be a symbolic code 

 used to represent Negative, but in itself, it is not Negative, it's
 minus-one! Just as my 'Negative' is a symbolic representation of the
 concept, at least it is self documenting.

 
In the extreme case, you could have a library with functions like

real plus57point36(real x) { return x + 57.36; }

Trivial, and useless.

Now you are insulting me. Even the hypothetical sign() function would get hugely more usage than this *extreme* example, so what's your point?

I'm not intending to insult you. The point is to establish that you have to draw the line somewhere. Once we agree that there is a line, we can discuss where it lies, and that's likely to be much more fruitful.
A problem with trivial functions is that there are so many of them. Why 
should only some get library status? 
How can you decide which are in, and which are out? I really think you 
have to be confident that the functions will be useful in multiple 
programs. Functions which are trivial and rarely used are generally not 
worth the effort. But there's definitely a gray area where it is very 
subjective.

Isn't one of the lovely things about 'trivial' routines that they can be inlined by the compiler and still create legible code by expressing the coder's intentions better than spelling out the operation in full every time?

if (sign(x) == Negative) is less legible and longer than if (x<0), and it is clear what the latter does when presented with a NAN or a negative 0. If you could write an intuitive and correct sign() function, I would agree that it belongs in a library. But it's just not possible.
Sep 13 2005
parent "Walter Bright" <newshound digitalmars.com> writes:
"Don Clugston" <dac nospam.com.au> wrote in message
news:dg6o3m$1ekn$1 digitaldaemon.com...
 Am I writing with invisible ink?

 My point was, that if "sign(x) * abs(x) == x" must be true for all values
 of x, and if the value of abs(-0.0) is zero, then it is irrelevant *in this
 context* what the value of sign(-0.0) is because the result of
 sign(-0.0)*abs(-0.0) will always be zero and never -0.0

AHA! Here's the heart of the matter. That is what you'd expect, intuitively. But intuition is misleading in this case. Indeed, abs(-0.0) returns 0.0. But -1 * 0.0 returns -0.0, whereas 1 * 0.0 returns 0.0. It does make a difference what sign(-0.0) returns.

The following also holds with floating point math:

    0 *  0 =>  0
    0 * -0 => -0
   -0 *  0 => -0
   -0 * -0 =>  0

For one case where the sign of 0 is critical, consider the atan2 function:

   atan2( 0, -0) =>  pi
   atan2(-0, -0) => -pi

pow() is also sensitive to the sign of 0 for x when y is a negative odd integer - it will return plus or minus infinity.
Sep 13 2005
prev sibling parent "Walter Bright" <newshound digitalmars.com> writes:
"Derek Parnell" <derek psych.ward> wrote in message
news:3u4tyzqorpzs$.19mwmyde18c3i.dlg 40tude.net...
 What is the value of abs(-0.0)? Is it zero? If it is, then the value of
 sign(-0.0) is not relevant.

But it is relevant in certain situations. Please refer to http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf pages 13 to 15 by Prof. Kahan, who pretty much invented IEEE 754 floating point arithmetic. The sign of 0 is carefully maintained and tracked by the underlying floating point implementation and the math functions.
 No, I can't see myself using it. But once it exists and is tested, its cost
 is microscopic and it would be appreciated by someone, therefore

 both the user and the librarian.

If the library fills up with trivial one liners, it will serve to discourage people from using it. Library functions need to each have a significant purpose.

For example, consider the extreme case:

   int addTwo(int x) { return x + 2; }

Can we agree that one would look askance at a library containing such functions? I sure would suspect that the author of such a library was trying to pump up the function count for marketing purposes.

On the other hand, writing a well behaved, efficient asinh() is not at all easy, and so it makes for an ideal library function. Where's the dividing line? Of course it's a matter of judgement, and to me sign() falls on the wrong side of that line.
Sep 13 2005
prev sibling parent reply Charles Hixson <charleshixsn earthlink.net> writes:
Walter Bright wrote:
 "Derek Parnell" <derek psych.ward> wrote in message
 news:y1h8ymdxkhwn.tg27eopfhs2k.dlg 40tude.net...
 Garbage! It's called 'abstraction'. The term 'signbit' implies that we are
 asking for a detail of the implementation rather than the mathematical
 concept of 'sign'.

... Any time one wants to test for the signedness of a floating point number, one has to think about "what happens if it is -0" and "what happens if it is a NaN". Making a function called "sign()" can't resolve the problem. It'll pretend to, and that'll lead to bugs. If one wants to go beyond (a<0), using signbit() is the right approach, because it causes one to have to think about the other cases.

 
 

One could have sign() raise an exception if the concept of sign didn't really apply, and require that those cases be handled with signbit(). Or one could return +1, 0, or -1, with zero being both 0 & -0 (and either raise an exception for NaNs or shoehorn those into the 0 value also).

There are reasonable ways to handle it, but there are different requirements for signbit() and sign(). signbit() specifically addresses the internal representation.
Sep 13 2005
parent reply Don Clugston <dac nospam.com.au> writes:
 One could have sign() raise an exception if the concept of sign didn't 
 really apply, and require that those cases be handled with signbit().  
 Or one could return +1, 0, or -1, with zero being both 0 & -0 (and 
 either raise an exception for Nan's or shoehorn those into the 0 value 
 also).

That's a really good point. I'd forgotten that sign() itself could return a negative zero. It should return 0 & -0 for +-NAN.

Well done, guys, you've convinced me. When defined in that way, sign() has obvious uses, and is surprisingly subtle and easy to get wrong.

   real sign(real x)
   {
       return x>0 ? 1 : x<0 ? -1 : copysign(0, x);
   }

behaves intuitively, and satisfies sign(x)*abs(x) is the same as x, for all x, even when x=-0.0, +-NAN. Now we have something (IMHO) well worthy of inclusion.
 There are reasonable ways to handle it, but there are different 
 requirements for signbit() and sign().  signbit() specifically addresses 
 the internal representation.

Sep 14 2005
next sibling parent reply Don Clugston <dac nospam.com.au> writes:
Don Clugston wrote:
 One could have sign() raise an exception if the concept of sign didn't 
 really apply, and require that those cases be handled with signbit().  
 Or one could return +1, 0, or -1, with zero being both 0 & -0 (and 
 either raise an exception for Nan's or shoehorn those into the 0 value 
 also).

That's a really good point. I'd forgotten that sign() itself could return a negative zero. It should return 0 & -0 for +-NAN. Well done, guys, you've convinced me. When defined in that way, sign() has obvious uses, and is surprisingly subtle and easy to get wrong.

   real sign(real x)
   {
       return x>0 ? 1 : x<0 ? -1 : copysign(0, x);
   }

behaves intuitively, and satisfies sign(x)*abs(x) is the same as x, for all x, even when x=-0.0, +-NAN.

Actually, more accurate and simpler is to return +-NAN when presented with a NAN. This also satisfies the identity, because -NAN * NAN = -NAN. So actually there are six different possible return values from this function! It can be done with only a single branch. How about it, Walter?
-----------------------------------

/** The mathematical sign() or signum() function.
    Returns 1 if x is positive, -1 if x is negative,
            0 if x is zero, real.nan if x is a nan.
    Preserves sign of +-0 and +-nan.
*/
real sign(real x)
{
   return x<>0 ? copysign(1, x) : x;
}

unittest {
assert( sign(-2.5) == -1 );
assert( isNegZero(sign(-0.0)) );
assert( isPosZero(sign(0.0)) );
assert( sign(real.infinity)==1 );
assert( isPosNan(sign(real.nan)) );
assert( isNegNan(sign(-real.nan)) );
}

// internal only

bit isPosNan(real x)
{
   return isnan(x) && !signbit(x);
}

bit isNegNan(real x)
{
   return isnan(x) && signbit(x);
}

// for completeness

int sign(int x)
{
  return x>0 ? 1 : x<0 ? -1 : x;
}

long sign(long x)
{
  return x>0 ? 1 : x<0 ? -1 : x;
}
Sep 14 2005
parent Don Clugston <dac nospam.com.au> writes:
I've researched this further, and it should be called signum(), because 
it can be defined for complex numbers, whereas sign() cannot.
signum() returns a number with the same phase as z, but with unit 
magnitude, except that if z is a complex zero, it returns z.
Again, it obeys

signum(z) * abs(z) ===>  z

The code below is not correct for nans, but gives the idea.
Further work is required (need to do complex abs() at the same time).
It's amazing how something so apparently trivial becomes complicated.

creal signum(creal z)
{
    return nonzero(z) ? z/abs(z) : z;
}

// returns false if z is zero or has a nan component
bit nonzero(creal z)
{
    return !isnan(z.im) && !isnan(z.re) && z != 0 + 0i;
}


Don Clugston wrote:
 Don Clugston wrote:
 
 Actually, more accurate and simpler is to return +-NAN when presented 
 with a NAN.
 This also satisfies the identity, because -NAN * NAN = -NAN.
 So actually there are six different possible return values from this 
 function!
 It can be done with only a single branch.
 How about it, Walter?
 -----------------------------------
 
 /** The mathematical sign() or signum() function.
     Returns 1 if x is positive, -1 if x is negative,
             0 if x is zero, real.nan if x is a nan.
     Preserves sign of +-0 and +-nan.
 */
 real sign(real x)
 {
    return x<>0 ? copysign(1, x) : x;
 }
 
 unittest {
 assert( sign(-2.5) == -1 );
 assert( isNegZero(sign(-0.0)) );
 assert( isPosZero(sign(0.0)) );
 assert( sign(real.infinity)==1 );
 assert( isPosNan(sign(real.nan)) );
 assert( isNegNan(sign(-real.nan)) );
 }
 
 // internal only
 
 bit isPosNan(real x)
 {
    return isnan(x) && !signbit(x);
 }
 
 bit isNegNan(real x)
 {
    return isnan(x) && signbit(x);
 }
 
 // for completeness
 
 int sign(int x)
 {
   return x>0 ? 1 : x<0 ? -1 : x;
 }
 
 long sign(long x)
 {
   return x>0 ? 1 : x<0 ? -1 : x;
 }

Sep 14 2005
prev sibling parent reply "Walter Bright" <newshound digitalmars.com> writes:
"Don Clugston" <dac nospam.com.au> wrote in message
news:dg8jvj$a5e$1 digitaldaemon.com...
 behaves intuitively, and satisfies
 sign(x)*abs(x) is the same as x, for all x,
 even when x=-0.0, +-NAN.

NaN math follows its own logic, which is why most math functions will first check for NaN operands and do something special with them separately from the main logic.

The sign bit for a NaN has no meaning, and any program that relies on it is faulty. abs(NaN) should return a NaN, not +NaN, as there really is no such thing as a + or - NaN even though the bit is there. sign(NaN) should therefore return NaN. This implies that floating point hardware may give random results for the NaN sign bit.

What one must be careful of, in using things like signbit(), is that one does not generate a dependence on the sign of NaN. (sign(NaN)*abs(NaN)==NaN) must return false, even if the bit patterns match exactly, as any comparison op with either or both operands being NaN will return false.
Sep 14 2005
parent reply Don Clugston <dac nospam.com.au> writes:
Walter Bright wrote:
 "Don Clugston" <dac nospam.com.au> wrote in message
 news:dg8jvj$a5e$1 digitaldaemon.com...
 
behaves intuitively, and satisfies
sign(x)*abs(x) is the same as x, for all x,
even when x=-0.0, +-NAN.

NaN math follows its own logic, which is why most math functions will first check for NaN operands and do something special with them separately from the main logic. The sign bit for a NaN has no meaning, and any program that relies on it is faulty. abs(NaN) should return a NaN, not +NaN, as there really is no such thing as a + or - NaN even though the bit is there. sign(NaN) should therefore return NaN. This implies that floating point hardware may give random results for the NaN sign bit. What one must be careful of, in using things like signbit(), is that one does not generate a dependence on the sign of NaN. (sign(NaN)*abs(NaN)==NaN) must return false, even if the bit patterns match exactly, as any comparison op with either or both operands being NaN will return false.

Right. I realized the problem when I considered the creal case. If a creal z contains a nan and a non-nan, there is no way that signum(z)*abs(z) can be the same as z, because abs(z) must be nan.

I was thinking there are some cases where a NaN is produced where the sign is still known (eg +infinity/+infinity), but since NaNs are not comparable it doesn't really make any sense.

Does D have a policy on dealing with NaNs? If a function receives a NaN, is it required to pass on the same NaN, or is it OK to simply return real.nan? (thereby destroying any information stored in the spare NaN bits, which it seems that D is not using). The former would cause issues if a function receives two nans -- which should be passed on?

In this case the non-signed-ness of NaN doesn't make any difference to the code, just to the comments and the unit tests. I'll do a version for complex numbers and resubmit, together with abs.
---------------------------

/** The mathematical sign (positive, negative, or zero)
     Defined such that signum(x)*abs(x) == x
     Returns: 1 if x is positive,
             -1 if x is negative,
              0 if x is zero.
     Preserves sign of +-0. Returns NaN if x is a NaN.
*/
real signum(real x)
{
    return x<>0 ? copysign(1, x) : x;
}

unittest {
assert( signum(-2.5) == -1 );
assert( isNegZero(signum(-0.0)) );
assert( isPosZero(signum(0.0)) );
assert( signum(real.infinity)==1 );
assert( isnan(signum(real.nan)) );
}
Sep 15 2005
next sibling parent Georg Wrede <georg.wrede nospam.org> writes:
Don Clugston wrote:
 
 Walter Bright wrote:
 
 "Don Clugston" <dac nospam.com.au> wrote in message 
 news:dg8jvj$a5e$1 digitaldaemon.com...
 
 behaves intuitively, and satisfies sign(x)*abs(x) is the same as
 x, for all x, even when x=-0.0, +-NAN.

NaN math follows its own logic, which is why most math functions will first check for NaN operands and do something special with them separately from the main logic. The sign bit for a NaN has no meaning, and any program that relies on it is faulty. abs(NaN) should return a NaN, not +NaN, as there really is no such thing as a + or - NaN even though the bit is there. sign(NaN) should therefore return NaN. This implies that floating point hardware may give random results for the NaN sign bit. What one must be careful of, in using things like signbit(), is that one does not generate a dependence on the sign of NaN. (sign(NaN)*abs(NaN)==NaN) must return false, even if the bit patterns match exactly, as any comparison op with either or both operands being NaN will return false.


 I was thinking there are some cases where a NaN is produced where the
  sign is still known,  (eg +infinity/+infinity) but since NaNs are
 not comparable it doesn't really make any sense.
 
 Does D have a policy on dealing with NaNs? If a function recieves a
 NaN, it is required to pass on the same NaN, or is it OK to simply
 return real.nan? (thereby destroying any information stored in the
 spare NaN bits, which it seems that D is not using). The former would
 cause issues if a function recieves two nans -- which should be
 passed on?
 
 In this case the non-signed-ness of NaN doesn't make any difference
 to the code, just to the comments and the unit tests. I'll do a
 version for complex numbers and resubmit, together with abs.

The standard leaves so many bit patterns for NaN that theoretically we could do serious math without ever resorting to other than NaN numbers. ;-)

However, looking at this from the other side, this just means that no bits in the entire NaN are significant, except for those that explicitly denote NaN. NaN exists for the sole purpose of propagating "the improperness" of a calculation result up the chain to where it can be intelligently handled, somewhat similar to exceptions in D, Java, and other languages.

Therefore, the *Politically Correct* way to handle a NaN (as one or more of the arguments to the function) is to return the relevant compiler constant for NaN. In so doing the library writer explicitly acknowledges that we are propagating *nothing more* than the knowledge that this time we could not produce a numeric result -- thus invalidating also all dependent numeric calculations.

Since the standard does not specify anything about the other bits, it is reasonable to assume that different brands of math processors (and most probably even different versions from the same manufacturer) return different kinds of garbage in a NaN. Similarly, passing one or more NaN values to the math processor, and of course getting a NaN as a result, may preserve the original bit pattern(s), or then again it may not. (The fancy word for this is Undefined.)
 If a function recieves a NaN, it is required to pass on the same NaN?

Let me rephrase: "If a function receives something that is not a number" ... "it is required to report this misconduct." And only that.

-----

One might of course argue that it is costlier to fetch and return a constant NaN when we already have the offending one ready and at hand. IMHO that is not relevant, because that only makes a difference in a tight loop (or whatever piece of code that gets run monstrous times in a short while). In reality, if that is the case, then the programmer ought to rethink his program logic anyway.
Sep 15 2005
prev sibling parent reply "Walter Bright" <newshound digitalmars.com> writes:
"Don Clugston" <dac nospam.com.au> wrote in message
news:dgbkav$ruj$1 digitaldaemon.com...
 Does D have a policy on dealing with NaNs?

Yes. It follows the IEEE 754 spec and the general convention on them.
 If a function recieves a NaN,
 it is required to pass on the same NaN, or is it OK to simply return
 real.nan? (thereby destroying any information stored in the spare NaN
 bits, which it seems that D is not using).

The function should do its best to propagate the original NaN.
 The former would cause issues
 if a function recieves two nans -- which should be passed on?

Either.
 In this case the non-signed-ness of NaN doesn't make any difference to
 the code, just to the comments and the unit tests.
 I'll do a version for complex numbers and resubmit, together with abs.
 ---------------------------

 /** The mathematical sign (positive, negative, or zero)
      Defined such that signum(x)*abs(x) == x
      Returns: 1 if x is positive,
              -1 if x is negative,
               0 if x is zero.
      Preserves sign of +-0. Returns NaN if x is a NaN.
 */
 real signum(real x)
 {
     return x<>0 ? copysign(1, x) : x;
 }

 unittest {
 assert( signum(-2.5) == -1 );
 assert( isNegZero(signum(-0.0)) );
 assert( isPosZero(signum(0.0)) );
 assert( signum(real.infinity)==1 );
 assert( isnan(signum(real.nan)) );
 }

I don't understand why this function is necessary.
Sep 15 2005
next sibling parent reply Don Clugston <dac nospam.com.au> writes:
Walter Bright wrote:
 "Don Clugston" <dac nospam.com.au> wrote in message
If a function recieves a NaN,
it is required to pass on the same NaN, or is it OK to simply return
real.nan? (thereby destroying any information stored in the spare NaN
bits, which it seems that D is not using).

The function should do its best to propagate the original NaN.
The former would cause issues
if a function recieves two nans -- which should be passed on?

Either.

Great. That clarification is very helpful.
real signum(real x)

I don't understand why this function is necessary.

Nor do I, really. copysign() achieves the same thing in every case I've been able to think of. There seemed to be a lot of objection to removing it from std.math2, though. I figured that if it was included, it should be implemented correctly. If you're unmoved by the clamour, just throw it away.

It seems as though the only remaining significant _mathematical_ function in std.math2 is trunc() -- or is it already covered by floor() and ceil()?

The other copyright question surrounds hypot(). In the cephes documentation, this is the only license info I've found:

"Some software in this archive may be from the book _Methods and Programs for Mathematical Functions_ (Prentice-Hall or Simon & Schuster International, 1989) or from the Cephes Mathematical Library, a commercial product. In either event, it is copyrighted by the author. What you see here may be used freely but it comes with no support or guarantee." -- Cephes readme.

Does this mean a BSD-style license? If so, then Cephes could also be used as a reference for some of the trickier math and statistic functions. (Starting with lgamma and tgamma, which are declared in std.c.math, but which cause linker errors if actually used :-) ). If not, then hypot() might be a problem.

Opinion?
Sep 15 2005
parent reply "Walter Bright" <newshound digitalmars.com> writes:
"Don Clugston" <dac nospam.com.au> wrote in message
news:dgdq0d$30tl$1 digitaldaemon.com...
 Nor do I, really. copysign() achieves the same thing in every case I've
 been able to think of. There seemed to be a lot of objection to removing
 it from std.math2, though. I figured that if it was included, it should
 be implemented correctly. If you've unmoved by the clamour, just throw
 it away.

copysign() is a standard function and needs to be there. It just doesn't need to be in std.math2.
 It seems as though the only remaining significant _mathematical_
 function in std.math2 is trunc() -- or is it already covered by floor()
 and ceil() ?

trunc() is in the DMC standard library. I just need to transfer it.
 The other copyright question surrounds hypot(). In the cephes
 documentation, this is the only license info I've found:

 "  Some software in this archive may be from the book _Methods and
 Programs for Mathematical Functions_ (Prentice-Hall or Simon & Schuster
 International, 1989) or from the Cephes Mathematical Library, a
 commercial product. In either event, it is copyrighted by the author.
 What you see here may be used freely but it comes with no support or
 guarantee. " -- Cephes readme.

 Does this mean a BSD-style license?
 If so, then Cephes could also be used as a reference for some of the
 trickier math and statistic functions.  (Starting with lgamma and
 tgamma, which are declared in std.c.math, but which cause linker errors
 if actually used :-) ).
 If not, then hypot() might be a problem.

 Opinion?

I think the Cephes libraries are the best out there. A couple years ago, I emailed Steve Moshier, the author, about the license with the same question, and here was his response:

---------------------------------------------------------------------
Thank you for writing. I have not put any restrictions on the use of
material posted to the net. Bell Labs has suggested this for the files
in netlib/cephes:
http://www.netlib.org/research/boilerplate
---------------------------------------------------------------------

Since then, I have filled in many gaps in the DMC runtime library with Steve's Cephes code. The Cephes code seems based on the old Cody&Waite algorithms, which are based on the earlier Hart&Cheney book.

Cephes does have functions missing from DMC like asinh, and the only reason I hadn't incorporated them yet is because it's a fair amount of work to adapt them, and I'm lazy. The thicket of #ifdef's needs to be removed, the symbols need to be converted to DMC symbols, the tables need to be converted from hex kludges to the modern hex float format, the poly calculation needs to be rewritten to use DMC's poly function, and the special cases need to be rewritten to use the x87's features.
Sep 16 2005
next sibling parent reply Sean Kelly <sean f4.ca> writes:
There's an interesting article on slashdot today about a new method for solving
trig. problems without using sin/cos/tan.  A link to info on the book is here:

http://web.maths.unsw.edu.au.nyud.net:8090/~norman/book.htm

I'll be curious to see the difference in speed and complexity of this new method
compared to the classic approach.  I don't suppose anyone has heard of this guy
before?


Sean
Sep 17 2005
parent reply Sean Kelly <sean f4.ca> writes:
In article <dghg6n$h8d$1 digitaldaemon.com>, Sean Kelly says...
There's an interesting article on slashdot today about a new method for solving
trig. problems without using sin/cos/tan.  A link to info on the book is here:

http://web.maths.unsw.edu.au.nyud.net:8090/~norman/book.htm

I'll be curious to see the difference in speed and complexity of this new method
compared to the classic approach.  I don't suppose anyone has heard of this guy
before?

Upon further reading, while the author talks up his ideas quite a lot, I don't see much that's actually revolutionary. It may turn out that the best aspect of this approach is its use as a teaching aid. Sean
Sep 17 2005
parent reply =?iso-8859-1?q?Knud_S=F8rensen?= <12tkvvb02 sneakemail.com> writes:
Sean I do not agree.

I have tried to write down some projections in his system and 
you can do them without any calculations at all just by copying
the data around.

Knud



On Sat, 17 Sep 2005 16:39:11 +0000, Sean Kelly wrote:

 In article <dghg6n$h8d$1 digitaldaemon.com>, Sean Kelly says...
There's an interesting article on slashdot today about a new method for solving
trig. problems without using sin/cos/tan.  A link to info on the book is here:

http://web.maths.unsw.edu.au.nyud.net:8090/~norman/book.htm

I'll be curious to see the difference in speed and complexity of this new method
compared to the classic approach.  I don't suppose anyone has heard of this guy
before?

Upon further reading, while the author talks up his ideas quite a lot, I don't see much that's actually revolutionary. It may turn out that the best aspect of this approach is its use as a teaching aid. Sean

Sep 18 2005
parent reply Sean Kelly <sean f4.ca> writes:
In article <pan.2005.09.18.14.26.58.463008 sneakemail.com>,
=?iso-8859-1?q?Knud_S=F8rensen?= says...
Sean I do not agree.

I have tried to write down some projections in his system and 
you can do them without any calculations at all just by coping 
the data around.

That's what I meant by its use as a teaching aid, though I'll admit that his method of solving trig problems beats the heck out of drawing a million circles. The reason I said it's not revolutionary is simply because his method draws from established trigonometric properties--quadrance, for example. That it avoids floating point math almost entirely, however, is very nice. I need to go back and finish reading the sample chapter--I had to run out after getting through the first few pages. I'd be interested in trying some performance comparisons between his method and traditional trig. I imagine it would extend quite easily to three dimensions? Sean
Sep 18 2005
parent Sai <Sai_member pathlink.com> writes:
But I agree with sean,

This new method looks easy, as it tries to avoid square roots and trig functions
in intermediate steps. So, instead of length they use quadrance and instead of
angle they use spread (which is actually the square of the sine of the angle, hence spread is
independent of the angle being acute or obtuse, remember "all silver tea cups"?)

So it saves time by not requiring one to find lengths and angles for intermediate
steps.

However, this approach is not new at all; engineers use it all the time to solve
equations involving trig functions, by working with tan(theta/2) instead of
angles. The difference is that here the author proposes the squared sine as the measure of
angle. And quadrance avoids the square root, that's all, nothing innovative.

As for me, I generally don't determine angles and lengths in intermediate steps
when it is not required; instead I use sin_theta or len_squared directly
where possible.

I also doubt if it can be a good teaching aid. I really think angle is easier
to understand than spread, and length is more natural to think about than quadrance,
even for a math beginner.

This new method is nothing but an interesting outcome of re-defining basic
objects in geometry. Something like re-defining fundamental units as [force,
volume, velocity] instead of [mass, length, time].

Sai



In article <dgk3ec$2p03$1 digitaldaemon.com>, Sean Kelly says...
In article <pan.2005.09.18.14.26.58.463008 sneakemail.com>,
=?iso-8859-1?q?Knud_S=F8rensen?= says...
Sean I do not agree.

I have tried to write down some projections in his system and 
you can do them without any calculations at all just by coping 
the data around.


Sep 18 2005
prev sibling parent reply Don Clugston <dac nospam.com.au> writes:
Walter Bright wrote:
 "Don Clugston" <dac nospam.com.au> wrote in message
 news:dgdq0d$30tl$1 digitaldaemon.com...
 
Nor do I, really. copysign() achieves the same thing in every case I've
been able to think of. There seemed to be a lot of objection to removing
it from std.math2, though. I figured that if it was included, it should
be implemented correctly. If you're unmoved by the clamour, just throw
it away.

copysign() is a standard function and needs to be there. It just doesn't need to be in std.math2.

I meant sign() could be removed from std.math2, not copysign(). I've found copysign() to be very useful.

In general, c = sign(a)*func(b) can be replaced, whenever func(b) is non-negative, with:

c = copysign(func(b), a);

which is also more efficient than a multiply. copysign() is already in std.math.
 trunc() is in the DMC standard library. I just need to transfer it.

real trunc(real x) { return std.c.math.truncl(x); } seems to work. Is it that simple?
 
 I think the Cephes libraries are the best out there. A couple years ago, I
 emailed Steve Moshier, the author, about the license with the same question,
 and here was his response:
 
 ---------------------------------------------------------------------
 Thank you for writing. I have not put any restrictions on the use of
 material posted to the net. Bell Labs has suggested this for the files
 in netlib/cephes:
 http://www.netlib.org/research/boilerplate
 ---------------------------------------------------------------------

Excellent!
 Since then, I have filled in many gaps in the DMC runtime library with
 Steve's Cephes code. The Cephes code seems based on the old Cody&Waite
 algorithms, which are based on the earlier Hart&Cheney book.
 
 Cephes does have functions missing from DMC like asinh, and the only reason
 I hadn't incorporated them yet is because it's a fair amount of work to
 adapt them, and I'm lazy.

And you've got more important things to do :-). That stuff can be done by people like me, who can't contribute to the compiler. As I get the time, I hope to port some of those functions from Cephes to DMD. I imagine it would be quite easy to port DMD functions to DMC.

 The thicket of #ifdef's needs to be removed, the
 symbols need to be converted to DMC symbols, the tables need to be converted
 from hex kludges to the modern hex float format, the poly calculation needs
 to be rewritten to use DMC's poly function, and the special cases need to be
 rewritten to use the x87's features.

poly[] is the other function from std.math2 which needs to be redone in std.math. Presumably the order should be

a[2]*x^2 + a[1]*x + a[0]

rather than the Cephes format

a[0]*x^2 + a[1]*x + a[2]

And there should also be a version with a signature like:

real poly(real x, real [] coeffs...)

The min(), max() and avg() functions in std.math2 should be redone as templates anyway, so I don't think there's any need to retain std.math2 now. Move to etc?
Sep 19 2005
parent "Walter Bright" <newshound digitalmars.com> writes:
"Don Clugston" <dac nospam.com.au> wrote in message
news:dglr4j$158q$1 digitaldaemon.com...
 Walter Bright wrote:
 trunc() is in the DMC standard library. I just need to transfer it.

real trunc(real x) { return std.c.math.truncl(x); } seems to work. Is it that simple?

Yes. The only issue is for C libraries that don't have a truncl().
 poly[] is the other function from std.math2 which needs to be redone in
 std.math. Presumably the order should be
 a[2]*x^2 + a[1]*x + a[0]
 rather than the Cephes format
 a[0] * x^2 + a[1]*x + a[2]

That's right. There's already one in the DMC library that is hand-built 8087 code.
 And there should also be a version with a signature like:

 real poly(real x, real [] coeffs...)

Yes.
 The min(), max() and avg() functions in std.math2 should be redone as
 templates anyway, so I don't think there's any need to retain std.math2
 now. Move to etc?

Yes, or just abandon it entirely.
Sep 19 2005
prev sibling parent Don Clugston <dac nospam.com.au> writes:
On second thoughts, cis should be renamed to exp(ireal).
Start to take advantage of the inbuilt support for imaginary numbers!
BTW, I was delighted to find that the .im and .re properties are
supported for real and ireal, despite it not being mentioned anywhere
in the docs.
Corollary to The First Rule of D: It's always better than you think.

--------------------

//  exp(i theta) = cos(theta) + i sin(theta).
version (X86)
{
     // On x86, this is almost as fast as sin(x).
     creal exp(ireal x)
     {
         asm {
             fld x;
             fsincos;             // ST(0) = cos, ST(1) = sin
             fxch st(1), st(0);   // creal return: re in ST(1), im in ST(0)
         }
     }
}
else
{
     creal exp(ireal x)
     {
         return cos(x.im) + sin(x.im)*1i;
     }
}

unittest {
     assert(exp(0i)==1+0i);
     assert(exp(1.3e5i)==cos(1.3e5)+sin(1.3e5)*1i);
     creal c = exp(ireal.nan);
     assert(isnan(c.re) && isnan(c.im));
     c = exp(ireal.infinity);
     assert(isnan(c.re) && isnan(c.im));
}
Sep 16 2005