
D - D complex vs C++ std::complex

reply "Walter" <walter digitalmars.com> writes:
I wrote this in response to repeated questions about it.
Comments/criticisms?

www.digitalmars.com/d/cppcomplex.html
Aug 25 2003
next sibling parent reply "Matthew Wilson" <matthew stlsoft.org> writes:
"Walter" <walter digitalmars.com> wrote in message
news:bier5s$etm$1 digitaldaemon.com...
 I wrote this in response to repeated questions about it.
 Comments/criticisms?

 www.digitalmars.com/d/cppcomplex.html
 ireal a, b, c;
 c = a + b;
 In C++, it is two adds, as the real parts get added too.
explain, please
 Multiply is worse, as 4 multiplies and two adds are done instead of one
 multiply.
explain, please
 Divide is the worst - D has one divide, whereas C++ implements complex
 division with typically one comparison, 3 divides, 3 multiplies and 3
 additions.
explain, please
The rest of it seems quite compelling, as long as it's true, for which I have
every faith. :)
Aug 26 2003
parent reply "Walter" <walter digitalmars.com> writes:
 ireal a, b, c;
 c = a + b;
 In C++, it is two adds, as the real parts get added too.
explain, please
 Multiply is worse, as 4 multiplies and two adds are done instead of one
 multiply.
explain, please
 Divide is the worst - D has one divide, whereas C++ implements complex
 division with typically one comparison, 3 divides, 3 multiplies and 3
 additions.
explain, please
Ok, check out the new page! www.digitalmars.com/d/cppcomplex.html
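In outline, a minimal C++ sketch of the arithmetic behind those counts (my own
illustration, assuming a std::complex built on the textbook formulas; it is not
the code on the page):

#include <complex>

int main()
{
    std::complex<double> a(0, 2), b(0, 3);   // pure imaginaries carried as complex

    // (0 + 2i) + (0 + 3i): the real parts (0 + 0) get added along with the
    // imaginary parts (2 + 3): two adds, where 2i + 3i needs only one.
    std::complex<double> sum = a + b;

    // (0 + 2i) * (0 + 3i) = (0*0 - 2*3) + (0*3 + 2*0)i:
    // four multiplies and two adds, where 2i * 3i = -(2*3) needs one multiply.
    std::complex<double> prod = a * b;

    // Complex division is usually done with a scaled formula (Smith's method):
    // one comparison, three divides, three multiplies, three additions,
    // against the single divide needed for 2i / 3i = 2.0/3.0.
    std::complex<double> quot = a / b;

    (void)sum; (void)prod; (void)quot;
    return 0;
}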
 The rest of it seems quite compelling, as long as it's true, for which I
 have every faith. :)
If you don't believe me, check out Prof. Kahan's referenced papers!
Aug 26 2003
parent "Matthew Wilson" <matthew stlsoft.org> writes:
"Walter" <walter digitalmars.com> wrote in message
news:bihjbf$1kv6$1 digitaldaemon.com...
 ireal a, b, c;
 c = a + b;
 In C++, it is two adds, as the real parts get added too.
explain, please
 Multiply is worse, as 4 multiplies and two adds are done instead of one
 multiply.

 explain, please

 Divide is the worst - D has one divide, whereas C++ implements complex
 division with typically one comparison, 3 divides, 3 multiplies and 3
 additions.

 explain, please
Ok, check out the new page! www.digitalmars.com/d/cppcomplex.html
Get it posted up on c.l.c.m!
 The rest of it seems quite compelling, as long as it's true, for which I
 have every faith. :)
If you don't believe me, check out Prof. Kahan's referenced papers!
Twas never in doubt, E.C.W. (esteemed compiler-walter)
Aug 27 2003
prev sibling parent reply "Sean L. Palmer" <palmer.sean verizon.net> writes:
Sorry, Walter, had to do it:  ;)

Syntactical Aesthetics:

Mostly irrelevant.  Anyone using this in reality would make a typedef for
std::complex<long double> instead of spelling it out constantly.  First step
in writing portable programs is centralizing declarations.  In reality it
wouldn't be so bad for the C++ programmer.  I know;  I've done this sort of
thing for years in C++, with templates and without.  Certainly looks more
like real math written the D way, but that method of defining values is not
indefinitely extendable nor user extendable.

Besides you don't mention how to construct an imaginary long double literal,
or imaginary float literal using a suffix.  Does that work?

Efficiency:

So, you finally see the logic behind my requests for builtin small
fixed-size vector and quaternion types.

In theory, it should be possible to design a type system whereby you could
return a type appropriate to the specific values of the inputs, thus
optimizing away unnecessary calculations.  This is painstaking and tedious
in C++;  you have to use overloading *heavily*, and manually specify many
many combinations, then face "ambiguous conversion" errors.  Perhaps OCaml
has a better way to say these sort of things?

We currently have to "go to the metal" to get performance out of these sort
of values.  You know as well as I do that compilers can't optimize flow
through assembly very well.  It's often a lose-lose situation.

Division behavior:  be careful here, as optimizing divides can leave
theoreticians and serious number crunchers upset about precision loss.
Personally I'm all for speed however, and turning divides into multiply by
reciprocals is generally good enough for my purposes.
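For example, a small C++ sketch (mine, not from the article) of the rewrite in
question:

// Rewrite repeated divisions by d into one divide and cheap multiplies.
// Each element picks up the rounding of r = 1/d plus the multiply's rounding,
// so results can differ in the last bit from dividing by d directly; fine for
// speed-minded code, not for people who want correctly rounded quotients.
void scale_all(double* v, int n, double d)
{
    double r = 1.0 / d;            // one divide up front
    for (int i = 0; i < n; ++i)
        v[i] *= r;                 // multiplies from here on
}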

Surely you could optimize a simple user-defined struct in this manner?  If
part of it is always a certain value, and the compiler knows this and can
verify it through computational flow, it can disregard results of operations
that are never used, and discard operations that are unnecessary.  If this
is implemented by the optimizer in a general way, all programs may benefit.
Giving us a single type that does it just teases us.  ;)

Semantics:

I think this guy is crazy.  Bone up on Geometric Algebra;  all the
functionality is in the operations, not the data storage format.  If C++ is
broken, fine, fix it.  But we all know that 0 + 1i = i and 1 + 0i = 1, so
results of operations on the two values (1 + 0i  vs.  1) should be
indistinguishable.  Storage format really doesn't matter;  it's all about
what happens during the operations and how you define said operations
(multiply, log, sqrt) and primitive values (the real numbers).

If there are sign issues, it's either a problem in the implementation of the
operations, or of the input values, or of the algorithm.  I'm not sure what
he was trying to do but I'm pretty confident it was either the first or the
last of those three.  Probably the first.

In conclusion, I don't think an "imaginary" type is strictly necessary for
correctness, nor do I believe that the implementation should favor two such
"axes" and ignore the plethora of other possible axes (you might think of
these as "units" or as separate types if you wish).  If you build in
imaginary, pretty soon people will be asking for (or kludgily building using
the available language tools) "x" "y" and "z" axis types, "length" and
"time" units, ad infinitum.

Sean

"Walter" <walter digitalmars.com> wrote in message
news:bier5s$etm$1 digitaldaemon.com...
 I wrote this in response to repeated questions about it.
 Comments/criticisms?

 www.digitalmars.com/d/cppcomplex.html
Aug 26 2003
next sibling parent John Reimer <jjreimer telus.net> writes:
 Besides you don't mention how to construct an imaginary long double literal,
 or imaginary float literal using a suffix.  Does that work?
Good question.
 Semantics:
 
 I think this guy is crazy.  Bone up on Geometric Algebra;  all the
 functionality is in the operations, not the data storage format.  If C++ is
 broken, fine, fix it.  But we all know that 0 + 1i = i and 1 + 0i = 1, so
 results of operations on the two values (1 + 0i  vs.  1) should be
 indistinguishable.  Storage format really doesn't matter;  it's all about
 what happens during the operations and how you define said operations
 (multiply, log, sqrt) and primitive values (the real numbers).
Earlier, I did argue in favor of distinct imaginary variables, but, in truth,
I can't argue yes or no with my level of knowledge. This I do know:

1) Being able to express imaginary literals is refreshingly easy in D.

2) C++'s method of adding an imaginary to an expression:

     + complex<long double>(0,7)

is painfully laborious (see the sketch below). And so it would be for any
other mathematical type that might be implemented as a template. I wouldn't
like to see this happen in D, not, at least, with complex numbers.

3) D's way of being able to declare an imaginary variable is just kind of
neat, because even if I can express a complex imaginary as:

     cdouble = 2i;  // can I do this?

or

     idouble = 2i;  // the same as above but imaginary only

instead of being forced into:

     cdouble = 0 + 2i;

it still "feels good" to separate the two parts (real and imaginary) into
variables in some situations. It may also be a readability thing. Since you
have two parts of a complex number, it's "kind of nice" to be able to specify
that part as a variable. I'm afraid this is the weak extent of my argument.
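For comparison, here is roughly what the C++ spelling from point 2 looks like
in context (a sketch of mine; the typedef name is just for illustration):

#include <complex>

typedef std::complex<long double> cldouble;   // illustrative name, not standard

cldouble shift(cldouble z)
{
    // With no imaginary literal, "add 7i" has to be spelled as a constructor call.
    return z + cldouble(0, 7);
}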
 If there are sign issues, it's either a problem in the implementation of the
 operations, or of the input values, or of the algorithm.  I'm not sure what
 he was trying to do but I'm pretty confident it was either the first or the
 last of those three.  Probably the first.
I tried understanding the professor's complaints, but the points escaped me (since I don't think about or deal with complex numbers at a level where such problems would show up). Implementation details are far beyond me at this point.
 In conclusion, I don't think an "imaginary" type is strictly necessary for
 correctness, nor do I believe that the implementation should favor two such
 "axes" and ignore the plethora of other possible axes (you might think of
 these as "units" or as separate types if you wish).  If you build in
 imaginary, pretty soon people will be asking for (or kludgily building using
 the available language tools) "x" "y" and "z" axis types, "length" and
 "time" units, ad infinitum.
I still don't understand this. Complex numbers are not the same thing as
vectors, no matter how similar in some ways. They are a special set of
numbers, nonetheless, and common enough that most languages like Python,
Fortran, and others have them implemented (argumentum ad populum :P). Vectors
don't fit that category. So far as I can see, vectors may as well be
implemented as they always have been, which allows maximum flexibility to the
programmer. Complex numbers are fairly static in their definition (they don't
have a varying number of dimensions), so building them in is a safe choice.
But I'm really not one to speak, ha!

Oh, by the way, Walter, can we have a set of 2D, 3D, and 4D matrix variables
too in D, just like the new OpenGL shading language? ;-D So fun, so fun!

John
Aug 27 2003
prev sibling parent reply "Walter" <walter digitalmars.com> writes:
"Sean L. Palmer" <palmer.sean verizon.net> wrote in message
news:bihj7g$1kri$1 digitaldaemon.com...
 Sorry, Walter, had to do it:  ;)

 Syntactical Aesthetics:

 Mostly irrelevant.  Anyone using this in reality would make a typedef for
 std::complex<long double> instead of spelling it out constantly.
I think they are poor *standard* types if the first thing the programmer will want to do is typedef them into being something more palatable.
 First step in writing portable programs is centralizing declarations.  In
 reality it wouldn't be so bad for the C++ programmer.  I know;  I've done
 this sort of thing for years in C++, with templates and without.
There's no easy way to get around the lack of imaginary literal syntax.
 Certainly looks more like real math written the D way, but that method of
 defining values is not indefinitely extendable nor user extendable.
Ints aren't user-extendible, either <g>.
 Besides you don't mention how to construct an imaginary long double
literal,
 or imaginary float literal using a suffix.  Does that work?
Yes: ireal x = 5.0 + 14.567Li
 Efficiency:

 So, you finally see the logic behind my requests for builtin small
 fixed-size vector and quaternion types.

 In theory, it should be possible to design a type system whereby you could
 return a type appropriate to the specific values of the inputs, thus
 optimizing away unnecessary calculations.  This is painstaking and tedious
 in C++;  you have to use overloading *heavily*, and manually specify many
 many combinations, then face "ambiguous conversion" errors.  Perhaps OCaml
 has a better way to say these sort of things?

 We currently have to "go to the metal" to get performance out of these
sort
 of values.  You know as well as I do that compilers can't optimize flow
 through assembly very well.  It's often a lose-lose situation.

 Division behavior:  be careful here, as optimizing divides can leave
 theoreticians and serious number crunchers upset about precision loss.
 Personally I'm all for speed however, and turning divides into multiply by
 reciprocals is generally good enough for my purposes.
There are many ways to express complex division, some certainly better than others, but they all involve a lot more than a single divide, which was all I was trying to show.
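For the curious, a sketch (mine, not the code from the page) of one common
scaled formulation, which is where the one comparison, 3 divides, 3 multiplies
and 3 additions come from:

#include <cmath>

struct Cplx { double re, im; };

// Smith-style complex division, scaled to avoid overflow in re*re + im*im.
Cplx cdiv(Cplx n, Cplx d)
{
    if (std::fabs(d.re) >= std::fabs(d.im)) {   // 1 comparison
        double r   = d.im / d.re;               // divide 1
        double den = d.re + d.im * r;           // multiply 1, add 1
        return { (n.re + n.im * r) / den,       // multiply 2, add 2, divide 2
                 (n.im - n.re * r) / den };     // multiply 3, add 3, divide 3
    } else {
        double r   = d.re / d.im;
        double den = d.im + d.re * r;
        return { (n.re * r + n.im) / den,
                 (n.im * r - n.re) / den };
    }
}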
 Surely you could optimize a simple user-defined struct in this manner?  If
 part of it is always a certain value, and the compiler knows this and can
 verify it through computational flow, it can disregard results of operations
 that are never used, and discard operations that are unnecessary.  If this
 is implemented by the optimizer in a general way, all programs may benefit.
 Giving us a single type that does it just teases us.  ;)
Yet the compiler does not know this. Consider passing a value to a function. The function cannot know that the real part is always 0.
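A minimal sketch of the point, in C++ terms (my own illustration):

#include <complex>

// Separately compiled: nothing here says the real part of z is always 0.
std::complex<double> scale(std::complex<double> z, std::complex<double> w)
{
    // The full (a*c - b*d) + (a*d + b*c)i product must be generated, even if
    // every caller only ever passes values whose real part is 0.
    return z * w;
}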
 Semantics:

 I think this guy is crazy.  Bone up on Geometric Algebra;  all the
 functionality is in the operations, not the data storage format.  If C++
is
 broken, fine, fix it.  But we all know that 0 + 1i = i and 1 + 0i = 1, so
 results of operations on the two values (1 + 0i  vs.  1) should be
 indistinguishable.
They aren't with IEEE arithmetic because of the sign of 0.
 Storage format really doesn't matter;  it's all about
 what happens during the operations and how you define said operations
 (multiply, log, sqrt) and primitive values (the real numbers).
Prof. Kahan isn't crazy, in fact, he is the prime mover behind IEEE 754 arithmetic. There really isn't anyone who knows floating point better than him. The trouble comes not from the mathematics, but from imperfections in how floating point is implemented. In dealing with complex numbers, you have a thing called "branch cuts" which wind up relying on the sign of 0. Carrying along an extra 0 for the real part winds up getting the signs wrong.
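A small illustration (my sketch, assuming an implementation that honors the
sign of zero as C99 Annex G asks): sqrt has its branch cut along the negative
real axis, and the sign of the zero imaginary part decides which side of the
cut you land on.

#include <complex>
#include <cstdio>

int main()
{
    std::complex<double> above(-1.0, +0.0);   // approaching -1 from above the cut
    std::complex<double> below(-1.0, -0.0);   // approaching -1 from below the cut

    std::complex<double> a = std::sqrt(above);   // (0, +1): +i
    std::complex<double> b = std::sqrt(below);   // (0, -1): -i

    // If an intermediate step adds a +0 into the imaginary part, the -0
    // becomes +0 and both calls land on the same side of the cut, so one
    // sign comes out wrong.
    std::printf("sqrt(-1+0i) = (%g, %g)\n", a.real(), a.imag());
    std::printf("sqrt(-1-0i) = (%g, %g)\n", b.real(), b.imag());
    return 0;
}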
 If there are sign issues, it's either a problem in the implementation of the
 operations, or of the input values, or of the algorithm.  I'm not sure what
 he was trying to do but I'm pretty confident it was either the first or the
 last of those three.  Probably the first.
 In conclusion, I don't think an "imaginary" type is strictly necessary for
 correctness, nor do I believe that the implementation should favor two such
 "axes" and ignore the plethora of other possible axes (you might think of
 these as "units" or as separate types if you wish).  If you build in
 imaginary, pretty soon people will be asking for (or kludgily building using
 the available language tools) "x" "y" and "z" axis types, "length" and
 "time" units, ad infinitum.
It's too bad that the analysis of what exactly is going wrong isn't on his web page, but in his book.
Aug 27 2003
next sibling parent reply John Reimer <jjreimer telus.net> writes:
 It's too bad that the analysis of what exactly is going wrong isn't on his
 web page, but in his book.
 
 
It is an interesting analysis that seems to warrant a real study. I've heard
you mention the professor before, but never really understood the topic's
importance. What's the name of the book again?

Later,
John
Aug 27 2003
parent reply "Walter" <walter digitalmars.com> writes:
"John Reimer" <jjreimer telus.net> wrote in message
news:bihsoe$24tq$1 digitaldaemon.com...
 It's too bad that the analysis of what exactly is going wrong isn't on his
 web page, but in his book.
It is an interesting analysis that seems to warrant a real study. I've heard
you mention the professor before, but never really understood the topic's
importance. What's the name of the book again?
" Branch Cuts for Complex Elementary Functions, or Much Ado About Nothing's Sign Bit " by W. Kahan, ch. 7 in The State of the Art in Numerical Analysis ( 1987 ) ed. by M. Powell and A. Iserles for Oxford U.P.
Aug 27 2003
parent John Reimer <jjreimer telus.net> writes:
 
 " Branch Cuts for Complex Elementary Functions, or Much Ado About Nothing's
 Sign Bit " by W. Kahan, ch.
 
 7 in The State of the Art in Numerical Analysis ( 1987 ) ed. by M. Powell
 and A. Iserles for Oxford U.P.
 
 
Thanks, I see now that it is already listed in the references on that page! Oops! :-P So much for my observation skills...
Aug 27 2003
prev sibling parent reply "Ben Hinkle" <bhinkle4 juno.com> writes:
[...snip...]
 Semantics:

 I think this guy is crazy.  Bone up on Geometric Algebra;  all the
 functionality is in the operations, not the data storage format.  If C++
is
 broken, fine, fix it.  But we all know that 0 + 1i = i and 1 + 0i = 1,
so
 results of operations on the two values (1 + 0i  vs.  1) should be
 indistinguishable.
They aren't with IEEE arithmetic because of the sign of 0.
 Storage format really doesn't matter;  it's all about
 what happens during the operations and how you define said operations
 (multiply, log, sqrt) and primitive values (the real numbers).
 Prof. Kahan isn't crazy, in fact, he is the prime mover behind IEEE 754
 arithmetic. There really isn't anyone who knows floating point better than
 him. The trouble comes not from the mathematics, but from imperfections in
 how floating point is implemented. In dealing with complex numbers, you have
 a thing called "branch cuts" which wind up relying on the sign of 0.
 Carrying along an extra 0 for the real part winds up getting the signs
 wrong.
I think Kahan's point with the examples he uses (the Curmudgeon paper) is to
show that you can't ignore the sign of 0 when dealing with branch cuts. See
also, for example, http://www.cs.berkeley.edu/~wkahan/MathH90/S26Aug02.pdf.

I haven't read C99's Annex G (the part of the C99 spec that deals with the
optional support of pure imaginary types), but my impression was that pure
imaginary is for efficiency and deals with things like imaginary Infs and
NaNs better than general complex Inf and NaN.

To quote from
http://grouper.ieee.org/groups/754/meeting-minutes/01-07-18.html:

  Annex G specifies three imaginary types: float, double, and long double
  imaginary. Operands not promoted to a common type domain (real, imaginary,
  complex), e.g. r(u + vi) = ru + rvi, not (r + 0i)(u + vi); provides natural
  efficiency and better treatment of special values, e.g. i i = , not
  (1 + 0i)(0 + i)(1 + 0i)(0 + i) = NaN + NaNi.

  Infinity properties for z nonzero and finite:
    inf*z=inf    inf*inf=inf    inf/z=inf    inf/0=inf
    z/inf=0      0/inf=0        z/0=inf      |inf|=inf
  even for complex and imaginary z, 0s, and infinities; a complex value with
  at least one infinite part is regarded as infinite (even if the other part
  is NaN).

Nothing about branch cuts there, from what I can tell.

Oh, and why does it say i*i is NaN+NaNi? That seems weird. I would think

  ((+0)+i)*((+0)+i)
              = ((+0)*(+0) - 1*1) + ((+0)*1 + 1*(+0))i
              = ((+0)-1) + ((+0) + (+0))i
              = -1 + (+0)i

I'd like D to include a complex math library that respects the sign of 0 in
arithmetic and branch cuts but doesn't include pure imaginary in the basic
language. This is what C99 does.

What, by the way, is the exact arithmetic of the pure imaginary type and how
does it behave when combined with complex or real numbers? Is the proposal to
follow C99 + Annex G?
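To make the "special values" line in that quote concrete, here is a small
sketch (mine, not from the minutes) of what promotion costs when an infinity
is involved; the exact output depends on how much NaN recovery the library
performs:

#include <complex>
#include <cstdio>
#include <limits>

int main()
{
    const double inf = std::numeric_limits<double>::infinity();
    std::complex<double> z(inf, 3.0);

    // Real operand kept real (what Annex G wants): componentwise, inf + 6i.
    std::complex<double> good = 2.0 * z;

    // Real operand promoted to (2 + 0i): the textbook product's imaginary
    // part is 2*3 + 0*inf = 6 + NaN = NaN.
    std::complex<double> bad = std::complex<double>(2.0, 0.0) * z;

    std::printf("componentwise: (%g, %g)\n", good.real(), good.imag());
    std::printf("promoted:      (%g, %g)\n", bad.real(), bad.imag());
    return 0;
}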
 If there are sign issues, it's either a problem in the implementation of the
 operations, or of the input values, or of the algorithm.  I'm not sure what
 he was trying to do but I'm pretty confident it was either the first or the
 last of those three.  Probably the first.
 In conclusion, I don't think an "imaginary" type is strictly necessary for
 correctness, nor do I believe that the implementation should favor two such
 "axes" and ignore the plethora of other possible axes (you might think of
 these as "units" or as separate types if you wish).  If you build in
 imaginary, pretty soon people will be asking for (or kludgily building using
 the available language tools) "x" "y" and "z" axis types, "length" and
 "time" units, ad infinitum.
It's too bad that the analysis of what exactly is going wrong isn't on his web page, but in his book.
Aug 27 2003
parent reply "Walter" <walter digitalmars.com> writes:
"Ben Hinkle" <bhinkle4 juno.com> wrote in message
news:bija5k$1aja$1 digitaldaemon.com...
 [...snip...]
 I think Kahan's point with the examples he uses (the Curmudgeon paper) is to
 show that you can't ignore the sign of 0 when dealing with branch cuts. See
 also, for example, http://www.cs.berkeley.edu/~wkahan/MathH90/S26Aug02.pdf.
 I haven't read C99's Annex G (the part of the C99 spec that deals with the
 optional support of pure imaginary types) but my impression was that pure
 imaginary is for efficiency and deals with things like imaginary Infs and
 NaNs better than general complex Inf and NaN.

 To quote from
 http://grouper.ieee.org/groups/754/meeting-minutes/01-07-18.html:
 Annex G  specifies three imaginary types: float, double, and long double
 imaginary
 Operands not promoted to a common type domain (real, imaginary, complex)
 e.g. r(u + vi) = ru + rvi, not (r + 0i)(u + vi); provides natural efficiency
 and better treatment of special values, e.g. i i = , not
 (1 + 0i)(0 + i)(1 + 0i)(0 + i) = NaN + NaNi
 Infinity properties for z nonzero and finite
 inf*z=inf    inf*inf=inf inf/z=inf    inf/0=inf
   z/inf=0      0/inf=0     z/0=inf      |inf|=inf
 even for complex and imaginary z, 0s, and infinities; a complex value with
 at least one infinite part is regarded as infinite (even if the other part
 is NaN)

 Nothing about branch cuts there, from what I can tell.
Right, that's a different issue.
 Oh, and why does it say i*i is NaN+NaNi? That seems weird. I would think
  ((+0)+i)*((+0)+i)
              = ((+0)*(+0) - 1*1) + ((+0)*1 + 1*(+0))i
              = ((+0)-1) + ((+0) + (+0))i
              = -1 + (+0)i
Something is weird about that example; I think something was dropped from it by some font conversion. ii= ??
 I'd like D to include a complex math library that respects the sign of 0
in
 arithmetic and branch cuts but doesn't include pure imaginary in the basic
 language. This is what C99 does.
Yes, D follows C99's lead (but C99 does have a pure imaginary).
 What, by the way, is the exact arithmetic of pure imaginary type and how
 does it behave when combined with complex or real numbers? Is the proposal
 to follow C99 + Annex G?
Yes. The C99 numerics folks know what they're doing.
Aug 27 2003
parent reply "Ben Hinkle" <bhinkle4 juno.com> writes:
"Walter" <walter digitalmars.com> wrote in message
news:bijs35$25b0$1 digitaldaemon.com...
 "Ben Hinkle" <bhinkle4 juno.com> wrote in message
 news:bija5k$1aja$1 digitaldaemon.com...
 [...snip...]
 I think Kahan's point with the examples he uses (the Curmudgeon paper) is to
 show that you can't ignore the sign of 0 when dealing with branch cuts. See
 also, for example, http://www.cs.berkeley.edu/~wkahan/MathH90/S26Aug02.pdf.
 I haven't read C99's Annex G (the part of the C99 spec that deals with the
 optional support of pure imaginary types) but my impression was that pure
 imaginary is for efficiency and deals with things like imaginary Infs and
 NaNs better than general complex Inf and NaN.

 To quote from
 http://grouper.ieee.org/groups/754/meeting-minutes/01-07-18.html:
 Annex G  specifies three imaginary types: float, double, and long double
 imaginary
 Operands not promoted to a common type domain (real, imaginary, complex)
 e.g. r(u + vi) = ru + rvi, not (r + 0i)(u + vi); provides natural efficiency
 and better treatment of special values, e.g. i i = , not
 (1 + 0i)(0 + i)(1 + 0i)(0 + i) = NaN + NaNi
 Infinity properties for z nonzero and finite
 inf*z=inf    inf*inf=inf inf/z=inf    inf/0=inf
   z/inf=0      0/inf=0     z/0=inf      |inf|=inf
 even for complex and imaginary z, 0s, and infinities; a complex value with
 at least one infinite part is regarded as infinite (even if the other part
 is NaN)

 Nothing about branch cuts there, from what I can tell.
Right, that's a different issue.
 Oh, and why does it say i*i is NaN+NaNi? That seems weird. I would think
  ((+0)+i)*((+0)+i)
              = ((+0)*(+0) - 1*1) + ((+0)*1 + 1*(+0))i
              = ((+0)-1) + ((+0) + (+0))i
              = -1 + (+0)i
 Something is weird about that example; I think something was dropped from it
 by some font conversion. ii= ??

 I'd like D to include a complex math library that respects the sign of 0
in
 arithmetic and branch cuts but doesn't include pure imaginary in the
basic
 language. This is what C99 does.
Yes, D follows C99's lead (but C99 does have a pure imaginary).
From what I can tell, C99 doesn't require the implementation to have a pure imaginary type; Annex G is informative.
 What, by the way, is the exact arithmetic of pure imaginary type and how
 does it behave when combined with complex or real numbers? Is the
proposal
 to follow C99 + Annex G?
Yes. The C99 numerics folks know what they're doing.
Well, there are some pitfalls with following IEEE 754 if you really care
about the sign of 0. See, for example, the section "Wrong Way to Implement
abs()" in http://www.concentric.net/~Ttwang/tech/javafloat.htm.

Also notice that since (-0) + (+0) = +0, +0 can't be used as an "additive
unit", meaning x + (+0) = x for all x. So suddenly, if you care about -0, you
have to figure out all the places you might be adding 0 to expressions and
make sure they are avoided (or look at them carefully). What a pain.

Here's a guy who has an anti-754 rant with a few interesting points:
http://www.my.netbsd.org/People/Pages/ross-essays.html

Ahh, if only IEEE 754 had a true zero, the only argument for pure imaginary
would be efficiency. Oh well.

-Ben
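A short sketch (mine) showing both pitfalls at once:

#include <cmath>
#include <cstdio>

// The "wrong way": -0.0 < 0 is false, so -0.0 comes back with its sign bit set.
double bad_abs(double x)
{
    return x < 0 ? -x : x;
}

int main()
{
    double nz = -0.0;

    std::printf("bad_abs(-0.0): signbit = %d\n", (int)std::signbit(bad_abs(nz)));
    std::printf("fabs(-0.0):    signbit = %d\n", (int)std::signbit(std::fabs(nz)));

    // (-0) + (+0) rounds to +0, so "x + 0.0" is not an identity when x is -0.0.
    std::printf("-0.0 + 0.0:    signbit = %d\n", (int)std::signbit(nz + 0.0));
    return 0;
}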
Aug 29 2003
parent "Walter" <walter digitalmars.com> writes:
"Ben Hinkle" <bhinkle4 juno.com> wrote in message
news:binhp0$1pe5$1 digitaldaemon.com...
 Yes. The C99 numerics folks know what they're doing.
 Well, there are some pitfalls with following IEEE 754 if you really care
 about the sign of 0. See, for example, the section "Wrong Way to Implement
 abs()" in http://www.concentric.net/~Ttwang/tech/javafloat.htm.
 Also notice that since (-0) + (+0) = +0, +0 can't be used as an "additive
 unit", meaning x + (+0) = x for all x.
 So suddenly, if you care about -0, you have to figure out all the places you
 might be adding 0 to expressions and make sure they are avoided (or look at
 them carefully). What a pain.

 Here's a guy who has an anti-754 rant with a few interesting points:
 http://www.my.netbsd.org/People/Pages/ross-essays.html

 Ahh, if only IEEE 754 had a true zero, the only argument for pure imaginary
 would be efficiency. Oh well.

 -Ben
Those are some very interesting articles. On the other hand, the FPU hardware we're dealing with is IEEE 754, so might as well embrace it <g>.
Aug 29 2003