
digitalmars.D - Real Close to the Machine: Floating Point in D

reply Walter Bright <newshound1 digitalmars.com> writes:
Don Clugston writes about floating point programming in D:

http://www.digitalmars.com/d/2.0/d-floating-point.html

http://www.reddit.com/r/programming/comments/8j10v/real_close_to_the_machine_floating_point_in_d/
May 08 2009
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Thanks to Don for the very nice explanation. I'll need more time to digest it
all.

I can see that D2 is one of the best languages at the moment (maybe the best)
regarding its management of floating point values. This means that people
interested in doing numerical FP computations may become interested in D2. But
for them, D might benefit from becoming a bit less of a "system language" and a
bit more of a "scientific language", especially regarding tiling loops, matrix
and domain management, automatic management of some data parallelism that is
currently nearly totally absent, etc. (see the Chapel language).


A more advanced usage, not yet supported on any platform(!) is to provide a
nested function to be used as a hardware exception handler.<
I didn't know this.
real NaN(ulong payload) -- create a NaN with a "payload", where the payload is
a ulong.<
- How many bits can reliably be stored there?
- That function returns a real, but can payloads be stored in doubles too?
- If doubles can be used, how many bits long can such a payload be?
- Can Don give me one or two example usages of such payloads? (I have never seen them used so far in any language.)

Bye and thank you,
bearophile
May 08 2009
parent Don <nospam nospam.com> writes:
bearophile wrote:
 Thanks to Don for the very nice explanation. I'll need more time to digest it
all.
 
 I can see that D2 is one of the best (maybe the best at the moment) language
regarding its management of floating point values. This means that people
interested in doing numerical FP computations may become interested in D2. But
for them D may enjoy becoming a bit less "system language" and a bit more
"scientific language", especially regarding tiling loops, matrix and domain
management, automatic management of some data parallelism that is currently
nearly totally absent, etc. (see Chapel language).
 
 A more advanced usage, not yet supported on any platform(!) is to provide a
nested function to be used as a hardware exception handler.<
I didn't know this.
The hardware supports it, but unfortunately there's no standardisation between OSes. Even on x86, it requires a totally different technique on Windows compared to Linux (it's easier on Windows).

And there's a problem with x87: given

  FLOP1 xxx FLOP2

where FLOP1 and FLOP2 are floating-point operations, if FLOP1 generates an exception condition, the exception doesn't actually happen until FLOP2! So the intervening xxx instructions are a big problem, and the feature is usable only in very limited circumstances. Most likely, we won't be able to do more than provide a few library functions that make use of it.
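By contrast, the sticky IEEE status flags are usable portably today. A minimal sketch using std.math's ieeeFlags/resetIeeeFlags (assuming runtime operands, so the compiler doesn't constant-fold the division away):

```d
import std.math : ieeeFlags, resetIeeeFlags;
import std.stdio : writeln;

void main()
{
    resetIeeeFlags();
    double num = 1.0;
    double den = 0.0;       // runtime values, so the FLOP really executes
    double q = num / den;   // sets the divide-by-zero status flag

    // Unlike a hardware exception handler, the flag is "sticky": it stays
    // set until explicitly reset, so it can be polled long after the
    // offending operation, with no OS-specific machinery involved.
    writeln(ieeeFlags.divByZero);   // expect: true
    writeln(q);                     // expect: inf
}
```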
 real NaN(ulong payload) -- create a NaN with a "payload", where the payload is
a ulong.<
 - How many bits can reliably be stored there?
It's in the docs for NaN().
 - That function returns a real, but can payloads be stored in doubles too?
Yes.
 - If doubles too can be used, how many bits long can be such payload?
 - Can Don give me one or two example usages of such payloads? (I have never
seen them used so far in any language.)
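To illustrate the kind of usage bearophile is asking about, here is a hedged sketch with std.math's NaN() and getNaNPayload() (the exact number of reliable payload bits per type is in those docs, as Don notes; the "record number" scenario is a hypothetical example, not from the article):

```d
import std.math : NaN, getNaNPayload, isNaN;

void main()
{
    // One use of a payload: tag a missing input with the record number
    // that produced it. Because NaNs propagate through arithmetic, the
    // "which datum was bad?" information survives a long chain of
    // computations with no extra bookkeeping.
    real badDatum = NaN(42);    // say record #42 was unparseable
    real result = badDatum * 2.0L + 1.0L;

    assert(isNaN(result));                  // the NaN propagated
    assert(getNaNPayload(badDatum) == 42);  // the tag is recoverable

    // Doubles can carry payloads too, just with fewer bits available,
    // since the payload lives in the (smaller) significand.
}
```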
 
 Bye and thank you,
 bearophile
May 08 2009
prev sibling next sibling parent Lutger <lutger.blijdestijn gmail.com> writes:
At www.digitalmars.com/d it's not listed in the articles section.
May 09 2009
prev sibling next sibling parent Fawzi Mohamed <fmohamed mac.com> writes:
On 2009-05-09 01:14:56 +0200, Walter Bright <newshound1 digitalmars.com> said:

 Don Clugston writes about floating point programming in D:
 
 http://www.digitalmars.com/d/2.0/d-floating-point.html
 
 http://www.reddit.com/r/programming/comments/8j10v/real_close_to_the_machine_floating_point_in_d/
Nice article Don!

On comparing floating points: if you have complex operations in between (like diagonalizations, ... or even a simple dot product) I often use

int feqrel2(T)(T x, T y)
{
    static if (isComplexType!(T))
    {
        return min(feqrel2(x.re, y.re), feqrel2(x.im, y.im));
    }
    else
    {
        const T shift = ctfe_powI(0.5, T.mant_dig / 4);
        if (x < 0)
        {
            return feqrel(x - shift, y - shift);
        }
        else
        {
            return feqrel(x + shift, y + shift);
        }
    }
}

because if you sum (as you say later), the absolute errors add up, and the relative error can become very large. If the result is close to 0 (where the density of floats is very high), or even 0 by chance, you cannot expect the relative error to stay small, so I cap the relative error close to 0 with an absolute one...

Fawzi
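For reference, the feqrel that Fawzi's wrapper builds on (std.math.feqrel) reports how many mantissa bits two values agree on; a small sketch:

```d
import std.math : feqrel;

void main()
{
    // feqrel returns the number of mantissa bits on which x and y agree.
    double x = 1.0;
    double y = x + x * 1e-10;   // perturb by a relative error of ~1e-10

    int bits = feqrel(x, y);
    // Roughly log2(1e10) ~ 33 of double's 53 mantissa bits are lost, so
    // feqrel reports agreement well below full precision...
    assert(bits > 25 && bits < 53);

    // ...while exact equality reports the full mantissa width.
    assert(feqrel(x, x) == double.mant_dig);
}
```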
May 09 2009
prev sibling parent reply Luís Marques <luismarques gmail.com> writes:
 A naive binary chop doesn't work correctly. The fact that there are hundreds
or thousands of times as many representable numbers between 0 and 1, as there
are between 1 and 2, is problematic for divide-and-conquer algorithms. A naive
binary chop would divide the interval [0 .. 2] into [0 .. 1] and [1 .. 2].
Unfortunately, this is not a true binary chop, because the interval [0 .. 1]
contains more than 99% of the representable numbers from the original interval!
How about adding a template to do a binary chop (binary search?) to std.algorithm?
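One classic workaround (not from the article; a sketch assuming non-negative, finite IEEE doubles) is to chop in bit-space rather than value-space: for such doubles, consecutive unsigned integer bit patterns correspond to consecutive representable values, so the integer midpoint splits the representables evenly.

```d
// Midpoint of [a .. b] by count of representable doubles, not by value.
// Valid for non-negative finite doubles, whose bit patterns are
// monotonically ordered when reinterpreted as unsigned integers.
double bitMidpoint(double a, double b)
{
    assert(a >= 0 && b >= 0 && a <= b);
    ulong ia = *cast(ulong*) &a;
    ulong ib = *cast(ulong*) &b;
    ulong im = ia + (ib - ia) / 2;   // halve the representable count
    return *cast(double*) &im;
}

void main()
{
    // The value midpoint of [0 .. 2] is 1, which leaves more than 99%
    // of the representables on one side. The bit midpoint is tiny
    // (around 2^-511), reflecting where the representables cluster.
    double m = bitMidpoint(0.0, 2.0);
    assert(m > 0.0 && m < 1e-150);
}
```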
May 12 2009
parent reply Don <nospam nospam.com> writes:
Luís Marques wrote:
 A naive binary chop doesn't work correctly. The fact that there are 
 hundreds or thousands of times as many representable numbers between 0 
 and 1, as there are between 1 and 2, is problematic for 
 divide-and-conquer algorithms. A naive binary chop would divide the 
 interval [0 .. 2] into [0 .. 1] and [1 .. 2]. Unfortunately, this is 
 not a true binary chop, because the interval [0 .. 1] contains more 
 than 99% of the representable numbers from the original interval!
How about adding a template to do a binary chop (binary search?) to std.algorithm?
findRoot() (which needs to be updated to take advantage of compiler improvements) does the job in the most important case. I'm quite proud of it; as far as I know, it uses a better algorithm than anything else on the planet. <g>
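For the curious, a minimal use of std.numeric.findRoot, assuming the overload that takes a callable and a bracketing interval [a, b] over which the function changes sign:

```d
import std.math : feqrel, sqrt;
import std.numeric : findRoot;

void main()
{
    // Find sqrt(2) as the root of f(x) = x^2 - 2 on the bracket [1, 2].
    // f(1) = -1 and f(2) = 2, so the interval brackets exactly one root.
    real r = findRoot((real x) { return x * x - 2.0L; }, 1.0L, 2.0L);

    // The result agrees with sqrt(2) to (nearly) full real precision.
    assert(feqrel(r, sqrt(2.0L)) >= real.mant_dig - 3);
}
```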
May 13 2009
next sibling parent Luís Marques <luismarques gmail.com> writes:
Don wrote:
 findRoot() (which needs to be updated to take advantage of compiler 
 improvements) does the job in the most important case. I'm quite proud 
 of it; as far as I know, it's uses a better algorithm than anything else 
 on the planet. <g>
:-) cool. Great article by the way.
May 14 2009
prev sibling parent reply "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
Don wrote:
 Luís Marques wrote:
 A naive binary chop doesn't work correctly. The fact that there are 
 hundreds or thousands of times as many representable numbers between 
 0 and 1, as there are between 1 and 2, is problematic for 
 divide-and-conquer algorithms. A naive binary chop would divide the 
 interval [0 .. 2] into [0 .. 1] and [1 .. 2]. Unfortunately, this is 
 not a true binary chop, because the interval [0 .. 1] contains more 
 than 99% of the representable numbers from the original interval!
How about adding a template to do a binary chop (binary search?) to std.algorithm?
findRoot() (which needs to be updated to take advantage of compiler improvements) does the job in the most important case. I'm quite proud of it; as far as I know, it's uses a better algorithm than anything else on the planet. <g>
Awesome! I hadn't even noticed the std.numeric module before. :)

Just a small comment: I think the type of the function parameter should be templated as well, so that one can pass functions, delegates and functors to it. Just now I tried to apply findRoot to an actual problem I'm working on, and immediately failed because I tried to pass a free function to it. A trivial thing to work around, but annoying nevertheless.

How do you choose/limit what to put in std.numeric? I don't suppose you're going to put all of NETLIB in there... ;) Already, it seems to me that findRoot is somewhat niche for a standard library.

-Lars
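Lars's suggestion can be sketched as follows (a hypothetical bisection wrapper for illustration, not findRoot's actual signature): template the callable's type, and free functions, delegates and lambdas all come through the same door.

```d
import std.traits : isCallable;

// Hypothetical solver with a templated callable parameter. Any F that
// isCallable accepts -- function pointer, delegate, or functor -- works.
T bisect(F, T)(F f, T a, T b, T tol)
    if (isCallable!F)
{
    assert(f(a) * f(b) <= 0);       // the root must be bracketed
    while (b - a > tol)
    {
        T m = (a + b) / 2;
        if (f(a) * f(m) <= 0) b = m; // root lies in [a, m]
        else a = m;                  // root lies in [m, b]
    }
    return (a + b) / 2;
}

double f(double x) { return x * x - 2.0; }

void main()
{
    // Both a free function and a delegate literal are accepted.
    auto byFunction = bisect(&f, 1.0, 2.0, 1e-12);
    auto byDelegate = bisect((double x) => x * x - 2.0, 1.0, 2.0, 1e-12);
    assert(byFunction > 1.41421 && byFunction < 1.41422);
    assert(byDelegate > 1.41421 && byDelegate < 1.41422);
}
```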
May 14 2009
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Lars T. Kyllingstad wrote:
 Don wrote:
 Luís Marques wrote:
 A naive binary chop doesn't work correctly. The fact that there are 
 hundreds or thousands of times as many representable numbers between 
 0 and 1, as there are between 1 and 2, is problematic for 
 divide-and-conquer algorithms. A naive binary chop would divide the 
 interval [0 .. 2] into [0 .. 1] and [1 .. 2]. Unfortunately, this is 
 not a true binary chop, because the interval [0 .. 1] contains more 
 than 99% of the representable numbers from the original interval!
How about adding a template to do a binary chop (binary search?) to std.algorithm?
findRoot() (which needs to be updated to take advantage of compiler improvements) does the job in the most important case. I'm quite proud of it; as far as I know, it's uses a better algorithm than anything else on the planet. <g>
Awesome! I hadn't even noticed the std.numeric module before. :) Just a small comment: I think that the type of the function parameter should be templated as well, so that one can pass both functions, delegates and functors to it. Just now I tried to apply findRoot to an actual problem I'm working on, and immediately failed because I tried to pass a free function to it. A trivial thing to work around, but annoying nevertheless. How do you choose/limit what to put in std.numeric? I don't suppose you're going to put all of NETLIB in there... ;) Already, it seems to me that findRoot is somewhat niche for a standard library.
I agree. Right now std.numeric is a repository of whatever Don and I have been interested in writing, loosely related to "numerics". But, for example, the string kernels are hardly numeric stuff. (By the way, I think my implementation of gapWeighted* is really the darn best around.)

I'd say, let it grow for a while and then ways to decompose it will show themselves.

Andrei
May 14 2009
prev sibling parent reply Don <nospam nospam.com> writes:
Lars T. Kyllingstad wrote:
 Don wrote:
 Luís Marques wrote:
 A naive binary chop doesn't work correctly. The fact that there are 
 hundreds or thousands of times as many representable numbers between 
 0 and 1, as there are between 1 and 2, is problematic for 
 divide-and-conquer algorithms. A naive binary chop would divide the 
 interval [0 .. 2] into [0 .. 1] and [1 .. 2]. Unfortunately, this is 
 not a true binary chop, because the interval [0 .. 1] contains more 
 than 99% of the representable numbers from the original interval!
How about adding a template to do a binary chop (binary search?) to std.algorithm?
findRoot() (which needs to be updated to take advantage of compiler improvements) does the job in the most important case. I'm quite proud of it; as far as I know, it's uses a better algorithm than anything else on the planet. <g>
Awesome! I hadn't even noticed the std.numeric module before. :) Just a small comment: I think that the type of the function parameter should be templated as well, so that one can pass both functions, delegates and functors to it. Just now I tried to apply findRoot to an actual problem I'm working on, and immediately failed because I tried to pass a free function to it. A trivial thing to work around, but annoying nevertheless. How do you choose/limit what to put in std.numeric? I don't suppose you're going to put all of NETLIB in there... ;) Already, it seems to me that findRoot is somewhat niche for a standard library.
I think it's one of a very small number of primitive math algorithms. It shows up _very_ frequently. In Tango, it's in tango.math.Bracket, along with a 1-D maximizer. (math.Bracket isn't a name I like very much.)

There's definitely an issue with Phobos' flat module organisation; it's not clear that Phobos can sensibly grow to be much more extensive than (say) the C standard library or the STL with that kind of structure. I'm convinced that implementation modules, at least, will need to go somewhere else.
May 14 2009
parent reply "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
Don wrote:
 Lars T. Kyllingstad wrote:
 Don wrote:
 Luís Marques wrote:
 A naive binary chop doesn't work correctly. The fact that there are 
 hundreds or thousands of times as many representable numbers 
 between 0 and 1, as there are between 1 and 2, is problematic for 
 divide-and-conquer algorithms. A naive binary chop would divide the 
 interval [0 .. 2] into [0 .. 1] and [1 .. 2]. Unfortunately, this 
 is not a true binary chop, because the interval [0 .. 1] contains 
 more than 99% of the representable numbers from the original interval!
How about adding a template to do a binary chop (binary search?) to std.algorithm?
findRoot() (which needs to be updated to take advantage of compiler improvements) does the job in the most important case. I'm quite proud of it; as far as I know, it's uses a better algorithm than anything else on the planet. <g>
Awesome! I hadn't even noticed the std.numeric module before. :) Just a small comment: I think that the type of the function parameter should be templated as well, so that one can pass both functions, delegates and functors to it. Just now I tried to apply findRoot to an actual problem I'm working on, and immediately failed because I tried to pass a free function to it. A trivial thing to work around, but annoying nevertheless. How do you choose/limit what to put in std.numeric? I don't suppose you're going to put all of NETLIB in there... ;) Already, it seems to me that findRoot is somewhat niche for a standard library.
I think it's one of a very small number of primitive math algorithms. It shows up _very_ frequently. In Tango, it's in tango.math.Bracket, along with a 1-D maximizer. (math.Bracket isn't a name I like very much). There's definitely an issue with Phobos' flat module organisation; it's not clear that Phobos can sensibly grow to be much more extensive than (say) the C standard library or the STL with that kind of structure. I'm convinced that implementation modules, at least, will need to go somewhere else.
In my numerics library (which I've, in all modesty, named SciD) I'm converging on a package structure I'm fairly pleased with:

scid.core:
     Internal modules, such as scid.core.traits, scid.core.tests,
     scid.core.exception, etc.

scid.ports:
     Here I put ports of various packages from NETLIB and other
     sources. Examples of subpackages are scid.minpack,
     scid.quadpack, etc. These have, for the most part, rather
     nasty APIs, and should normally not be used directly.

scid:
     In the root package I've placed the most ubiquitous modules,
     i.e. scid.vector, scid.transformation, etc.

scid.calculus
     scid.calculus.differentiation
     scid.calculus.integration
     ...
scid.linalg
     ...
scid.nonlinear
     ...

Specific problem areas get their own subpackages. At the moment these mostly just contain nicer interfaces to the routines in scid.ports -- "drivers", if you will.

I've tried a lot of setups, and this is the one I've found most flexible. The hierarchy isn't as shallow (and inextensible) as Phobos', nor as deep (and unwieldy) as Tango's.

I can definitely see how having a flat module hierarchy may prohibit the expansion of Phobos. I'm not at all convinced this is a bad thing, though.

I really liked your FP article, by the way. It made things a lot clearer for me. :)

-Lars
May 14 2009
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Lars T. Kyllingstad wrote:
 I can definitely see how having a flat module hierarchy may prohibit the 
 expansion of Phobos. I'm not at all convinced this is a bad thing, though.
 
 I really liked your FP article, by the way. It made things a lot clearer 
  for me. :)
Agreed on both counts! Andrei
May 14 2009
prev sibling next sibling parent reply Don <nospam nospam.com> writes:
Lars T. Kyllingstad wrote:
 Don wrote:
 Lars T. Kyllingstad wrote:
 Don wrote:
 Luís Marques wrote:
 A naive binary chop doesn't work correctly. The fact that there 
 are hundreds or thousands of times as many representable numbers 
 between 0 and 1, as there are between 1 and 2, is problematic for 
 divide-and-conquer algorithms. A naive binary chop would divide 
 the interval [0 .. 2] into [0 .. 1] and [1 .. 2]. Unfortunately, 
 this is not a true binary chop, because the interval [0 .. 1] 
 contains more than 99% of the representable numbers from the 
 original interval!
How about adding a template to do a binary chop (binary search?) to std.algorithm?
findRoot() (which needs to be updated to take advantage of compiler improvements) does the job in the most important case. I'm quite proud of it; as far as I know, it's uses a better algorithm than anything else on the planet. <g>
Awesome! I hadn't even noticed the std.numeric module before. :) Just a small comment: I think that the type of the function parameter should be templated as well, so that one can pass both functions, delegates and functors to it. Just now I tried to apply findRoot to an actual problem I'm working on, and immediately failed because I tried to pass a free function to it. A trivial thing to work around, but annoying nevertheless. How do you choose/limit what to put in std.numeric? I don't suppose you're going to put all of NETLIB in there... ;) Already, it seems to me that findRoot is somewhat niche for a standard library.
I think it's one of a very small number of primitive math algorithms. It shows up _very_ frequently. In Tango, it's in tango.math.Bracket, along with a 1-D maximizer. (math.Bracket isn't a name I like very much). There's definitely an issue with Phobos' flat module organisation; it's not clear that Phobos can sensibly grow to be much more extensive than (say) the C standard library or the STL with that kind of structure. I'm convinced that implementation modules, at least, will need to go somewhere else.
In my numerics library (which I've, in all modesty, named SciD) I'm converging on a package structure I'm fairly pleased with: scid.core: Internal modules, such as scid.core.traits, scid.core.tests, scid.core.exception, etc. scid.ports: Here I put ports of various packages from NETLIB and other sources. Examples of subpackages are scid.minpack, scid.quadpack, etc. These have, for the most part, rather nasty APIs, and should normally not be used directly. scid: In the root package I've placed the most ubiquitous modules, i.e. scid.vector, scid.transformation, etc. scid.calculus scid.calculus.differentiation scid.calculus.integration ... scid.linalg ... scid.nonlinear ... Specific problem areas get their own subpackages. At the moment these mostly just contain nicer interfaces to the routines in scid.ports -- "drivers", if you will. I've tried a lot of setups, and this is the one I've found most flexible. The hierarchy isn't as shallow (and inextensible) as Phobos', nor is it as deep (and unwieldly) as Tango's.
That sounds about right. But you might even consider dropping it down one level. For example, everyone knows that differentiation is calculus.differentiation; I think you only need a level when the module name is ambiguous on its own. (std.integration might be about Product Integration or something; but math.integration is unambiguous.)

The tango.math hierarchy only has a single level: module tango.math.Module; (and internal stuff is in tango.math.internal.Implementation). Interestingly, when people have complained about Tango's hierarchy, they've always excluded tango.math from the complaints. I suspect that means that two levels is about right (and btw it's interesting to note that biologists use genus.species -- two levels again). I'm no expert on this, though.
 I can definitely see how having a flat module hierarchy may prohibit the 
 expansion of Phobos. I'm not at all convinced this is a bad thing, though.
The scope of Phobos really needs to be decided sometime. I think it either needs to expand significantly, or else some existing code needs to be removed (is std.xml actually fundamental?) If Phobos were to be small, then Tango would make a lot of sense as a standard collection of add-on functionality. That's not the only possibility, but it is something that needs to be decided.
 I really liked your FP article, by the way. It made things a lot clearer 
  for me. :)
Thanks! It made a lot of things clearer for me, too <g>. Don
May 14 2009
parent "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
Don wrote:
 Lars T. Kyllingstad wrote:
 Don wrote:
 Lars T. Kyllingstad wrote:
 Don wrote:
 Luís Marques wrote:
 A naive binary chop doesn't work correctly. The fact that there 
 are hundreds or thousands of times as many representable numbers 
 between 0 and 1, as there are between 1 and 2, is problematic for 
 divide-and-conquer algorithms. A naive binary chop would divide 
 the interval [0 .. 2] into [0 .. 1] and [1 .. 2]. Unfortunately, 
 this is not a true binary chop, because the interval [0 .. 1] 
 contains more than 99% of the representable numbers from the 
 original interval!
How about adding a template to do a binary chop (binary search?) to std.algorithm?
findRoot() (which needs to be updated to take advantage of compiler improvements) does the job in the most important case. I'm quite proud of it; as far as I know, it's uses a better algorithm than anything else on the planet. <g>
Awesome! I hadn't even noticed the std.numeric module before. :) Just a small comment: I think that the type of the function parameter should be templated as well, so that one can pass both functions, delegates and functors to it. Just now I tried to apply findRoot to an actual problem I'm working on, and immediately failed because I tried to pass a free function to it. A trivial thing to work around, but annoying nevertheless. How do you choose/limit what to put in std.numeric? I don't suppose you're going to put all of NETLIB in there... ;) Already, it seems to me that findRoot is somewhat niche for a standard library.
I think it's one of a very small number of primitive math algorithms. It shows up _very_ frequently. In Tango, it's in tango.math.Bracket, along with a 1-D maximizer. (math.Bracket isn't a name I like very much). There's definitely an issue with Phobos' flat module organisation; it's not clear that Phobos can sensibly grow to be much more extensive than (say) the C standard library or the STL with that kind of structure. I'm convinced that implementation modules, at least, will need to go somewhere else.
In my numerics library (which I've, in all modesty, named SciD) I'm converging on a package structure I'm fairly pleased with: scid.core: Internal modules, such as scid.core.traits, scid.core.tests, scid.core.exception, etc. scid.ports: Here I put ports of various packages from NETLIB and other sources. Examples of subpackages are scid.minpack, scid.quadpack, etc. These have, for the most part, rather nasty APIs, and should normally not be used directly. scid: In the root package I've placed the most ubiquitous modules, i.e. scid.vector, scid.transformation, etc. scid.calculus scid.calculus.differentiation scid.calculus.integration ... scid.linalg ... scid.nonlinear ... Specific problem areas get their own subpackages. At the moment these mostly just contain nicer interfaces to the routines in scid.ports -- "drivers", if you will. I've tried a lot of setups, and this is the one I've found most flexible. The hierarchy isn't as shallow (and inextensible) as Phobos', nor is it as deep (and unwieldly) as Tango's.
That sounds about right. But you might even consider dropping it down one level. For example, everyone knows that differentiation is calculus.differentiation; I think you only need a level when the module name is ambiguous on its own. (std.integration might be about Product Integration or something; but math.integration is unambiguous).
Thanks, I'll keep that in mind. At one point I had to use several levels because I kept interfaces and implementations in the same modules, and the source files quickly became very large. Now that I've put the implementations in scid.ports, using one level is a lot more manageable.

Whether I choose to use one or two levels, the idea is to mimic the structure of the GAMS classification tree.

-Lars
May 14 2009
prev sibling next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Lars T. Kyllingstad (public kyllingen.NOSPAMnet)'s article
 In my numerics library (which I've, in all modesty, named SciD) I'm
 converging on a package structure I'm fairly pleased with:
 scid.core:
      Internal modules, such as scid.core.traits, scid.core.tests,
      scid.core.exception, etc.
 scid.ports:
      Here I put ports of various packages from NETLIB and other
      sources. Examples of subpackages are scid.minpack,
      scid.quadpack, etc. These have, for the most part, rather
      nasty APIs, and should normally not be used directly.
 scid:
      In the root package I've placed the most ubiquitous modules,
      i.e. scid.vector, scid.transformation, etc.
 scid.calculus
      scid.calculus.differentiation
      scid.calculus.integration
      ...
 scid.linalg
      ...
 scid.nonlinear
      ...
      Specific problem areas get their own subpackages. At the moment
      these mostly just contain nicer interfaces to the routines in
      scid.ports -- "drivers", if you will.
Wait a minute, you have a numerics library for D that does all this stuff? I'd like to hear more about this. If/when it's in a usable state, please put it up on dsource and post it to http://prowiki.org/wiki4d/wiki.cgi?ScientificLibraries.
May 14 2009
parent reply "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
dsimcha wrote:
 == Quote from Lars T. Kyllingstad (public kyllingen.NOSPAMnet)'s article
 In my numerics library (which I've, in all modesty, named SciD) I'm
 converging on a package structure I'm fairly pleased with:
 scid.core:
      Internal modules, such as scid.core.traits, scid.core.tests,
      scid.core.exception, etc.
 scid.ports:
      Here I put ports of various packages from NETLIB and other
      sources. Examples of subpackages are scid.minpack,
      scid.quadpack, etc. These have, for the most part, rather
      nasty APIs, and should normally not be used directly.
 scid:
      In the root package I've placed the most ubiquitous modules,
      i.e. scid.vector, scid.transformation, etc.
 scid.calculus
      scid.calculus.differentiation
      scid.calculus.integration
      ...
 scid.linalg
      ...
 scid.nonlinear
      ...
      Specific problem areas get their own subpackages. At the moment
      these mostly just contain nicer interfaces to the routines in
      scid.ports -- "drivers", if you will.
Wait a minute, you have a numerics library for D that does all this stuff? I'd like to hear more about this. If/when it's in a usable state, please put it up on dsource and post it to http://prowiki.org/wiki4d/wiki.cgi?ScientificLibraries.
It's in a usable, but very incomplete, state. I fully intend to publish it on dsource at some point, but there are a couple of reasons why I haven't done it yet:

1. I don't want to burden dsource with yet another dead project with some half-baked modules in it. I haven't got a lot of time to work on it, and mostly I just add things as I need them in my work.

2. I have to figure out some licensing issues. Some algorithms are clearly public domain, while some things -- like code I've ripped off Numerical Recipes, for instance -- are more questionable. (Although the NR guys do quite a lot of off-ripping themselves. ;)

3. The API is still very much in a state of flux, and I need to decide how I want it. Here, I feel I'm making some progress now. After a long period of doing analytical calculations by hand, I'm now back on the numerics bandwagon and have started actually using my library again. The best way to judge whether an API is usable is to actually use it. ;)

If you're interested, this is what I have:

Integration:
 - Five of the QUADPACK routines (qk, qng, qag, qags, qagi)
 - Mori's Double Exponential algorithm

Differentiation:
 - Forward/backward/symmetric difference
 - Ridders' extrapolation method
 - A home-brewed templated function for doing higher-order derivatives by symmetric difference
 - Jacobian matrix by forward difference (from MINPACK)

Nonlinear equation solvers:
 - Bisection, secant, false position, and Ridders' method

Nonlinear system solvers:
 - MINPACK's variant of the Powell hybrid algorithm
 - The Newton-Raphson algorithm

I'm hoping someone will create a linear algebra library for D2 soon. I know very little about that stuff. At the very least, I think Phobos should have some basic vector/matrix functionality. I've looked at Don's BLADE library; it looks pretty awesome. Here's to hoping he finds the time and inspiration to continue work on it again. :)

-Lars
May 14 2009
parent reply Georg Wrede <georg.wrede iki.fi> writes:
Lars T. Kyllingstad wrote:
 2. I have to figure out some licensing issues. Some algorithms are 
 clearly public domain, while some things -- like code I've ripped off 
 Numerical Recipes, for instance -- is more questionable. (Although the 
 NR guys do quite a lot of off-ripping themselves. ;)
If you're talking about any one of the books that come up when entering "Numerical Recipes" in the Amazon search box, I'd say that those recipes are freely usable. That's why they're in the books. Checking, of course may be good, but if anyone publishes recipes in a book and then sues people for actually using them, I'd sue *them* for entrapment.
May 15 2009
next sibling parent reply Bill Baxter <wbaxter gmail.com> writes:
On Fri, May 15, 2009 at 12:22 PM, Georg Wrede <georg.wrede iki.fi> wrote:
 Lars T. Kyllingstad wrote:
 2. I have to figure out some licensing issues. Some algorithms are clearly
 public domain, while some things -- like code I've ripped off Numerical
 Recipes, for instance -- is more questionable. (Although the NR guys do
 quite a lot of off-ripping themselves. ;)
If you're talking about any one of the books that come up when entering "Numerical Recipes" in the Amazon search box, I'd say that those recipes are freely usable. That's why they're in the books. Checking, of course may be good, but if anyone publishes recipes in a book and then sues people for actually using them, I'd sue *them* for entrapment.
Well, you'd better call your lawyer then. :-) The usage terms on the classic Numerical Recipes are terrible. Basically, you're only allowed to use their code if you purchased a copy of the book, which means that if you use their code, you are not allowed to share the NR portion of your code with anyone who does not own the book.

Their code is not so pretty anyway, so it's much better to just read the ideas and implement them yourself, thus avoiding any potential legal hassle.

SciPy has recently done a sweep through their code, purging anything that looks like it was derived from NR code for this reason. They also went through and made it clear, in all comments that refer to NR for explanations of algorithms, that they did NOT use NR code as a basis for the code in SciPy.

--bb
May 15 2009
parent reply Georg Wrede <georg.wrede iki.fi> writes:
Bill Baxter wrote:
 On Fri, May 15, 2009 at 12:22 PM, Georg Wrede <georg.wrede iki.fi> wrote:
 Lars T. Kyllingstad wrote:
 2. I have to figure out some licensing issues. Some algorithms are clearly
 public domain, while some things -- like code I've ripped off Numerical
 Recipes, for instance -- is more questionable. (Although the NR guys do
 quite a lot of off-ripping themselves. ;)
If you're talking about any one of the books that come up when entering "Numerical Recipes" in the Amazon search box, I'd say that those recipes are freely usable. That's why they're in the books. Checking, of course may be good, but if anyone publishes recipes in a book and then sues people for actually using them, I'd sue *them* for entrapment.
Well you'd better call your lawyer then. :-) The usage terms on the classic Numerical Recipes are terrible. Basically you're only allowed to use their code if you purchased a copy of the book. Which basically means if you use their code, then you are not allowed to share the NR portion of your code with anyone who does not own the book.
Damn. It's a shame that I currently don't have a company of, say, 20 coders. Then I'd have enough slack in my turnover to take on issues like this. Here in Europe, some of those things aren't looked upon too favorably. Just last week I read that Intel had been fined millions of dollars, just because they had twisted the arm of some wholesaler so that they entirely excluded AMD.
 Their code is not so pretty anyway, so much better to just read the
 ideas and implement it yourself, thus avoiding any potential legal
 hassle.
Yes, if the algorithm is published in a book, then it's a lot easier for us to implement it, instead of "translating" /their/ C[++] implementation.
 SciPy has recently done a sweep through their code purging anything
 that looks like it was derived from NR code for this reason.  They
 also went through and made it clear in all comments which refer to NR
 for explanations of algorithms that they did NOT use NR code as a
 basis for the code in SciPy.
The *first* half of this century belongs to the RIAA and their friends. And similarly, just as the Soviet Union thought socialism /everywhere/ was the answer, some other countries seem to think the free market /everywhere/, unhindered, is the answer.

For example, TV was originally thought of as a superior medium to educate the masses, to bring equality to citizens by enabling everyone access to the same education. Today, one watches Oprah, Dr Phil, etc. and sees 3 minutes of programming, 6 minutes of commercials, and the same all over again. Watching those shows overseas makes one weep with compassion. Here we see the shows at least half an hour at a time, interspersed (sometimes) with brief flashes saying "put commercial here". And we have jingles between programming and commercials.

During the *second* half of this century, most countries and continents will have adopted a more pragmatic way. (Just as in programming languages: full OOP sucks, full Purely Functional sucks, etc.) They will have adopted a pragmatic mix of state-controlled, free-market, branch-cooperative, and third-sector economic systems. For example, deciding what may be shown to kids before 9am is not a choice left to "whoever makes programming that the average 2-second channel hopper sticks to", or to whoever is audacious enough to fill 95% of an hour with commercials and only 5% with programming "because the audience seems to cope". That decision has to rest with the FCC, or some other authority not guided by quarterly revenue.

Just yesterday I watched a documentary from China. Seems like the Central Government (or whatever they're called) has adopted a way of thinking about economic issues that kind of straddles the socialist and the market economy. Most things follow the market economy, but some things are tightly controlled.

(Yes, I know, they censor the Internet heavily, but the commentators and the interviewees (some of whom were economics students at regular Chinese, Shanghai, or Hong Kong universities) seemed to think that some censorship is a small price to pay for economic stability, long-term predictability, and reason in administration. Which they (according to them) couldn't have with a revolution or a shift of power.)

We don't need to copy NR. All we need is to glimpse at their algorithms. And anyway, D is so much more powerful that those algorithms would be implemented in another way anyhow.
May 15 2009
parent reply Bill Baxter <wbaxter gmail.com> writes:
On Fri, May 15, 2009 at 5:15 PM, Georg Wrede <georg.wrede iki.fi> wrote:

 We don't need to copy NR. All we need is to glimpse at their algorithms. And
 anyway, D is so much more powerful that those algorithms would be
 implemented in another way anyhow.
The NR in C code is basically a translated version of the NR in Fortran code, right down to using 1-based indexing. The NR in C++ code is the C code with a few token classes thrown in for vectors, still using 1-based indexing. It's not something you'd really want to emulate, especially not in D. Unless you just happen to love Fortran-style code. :-) --bb
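For the curious, the Fortran flavour here is mostly about 1-based indexing. A hedged sketch (mine, not NR's actual code): a legal way to fake 1-based indices in C++ is to allocate n+1 slots and ignore element 0. NR-in-C instead handed out a pointer shifted back by one element, which is technically undefined behaviour in standard C and C++ -- one more reason not to emulate the style.

```cpp
#include <vector>

// Fortran-style 1-based indexing, sketched legally: allocate n+1
// elements and simply never touch index 0, so valid indices run 1..n.
// (NR-in-C instead returned a pointer shifted back by one element,
// which is technically undefined behaviour in standard C and C++.)
std::vector<double> fortran_style_vector(int n) {
    return std::vector<double>(n + 1, 0.0);  // element 0 is wasted padding
}
```

With that, loops read `for (int i = 1; i <= n; ++i) v[i] = ...;`, which is exactly the Fortran idiom the C translation preserves.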
May 15 2009
parent neob <neobstojec gmail.com> writes:
 The NR in C++ code is the C code with a few token classes thrown in
 for vectors.  Still using 1-based indexing.
You must be talking about NR 2 C++ code. NR 3 code uses 0-based indexing (and it allows you to use STL vectors). It's also a lot more object oriented than NR 2 and it uses templates instead of function pointer parameters. Still, it's not very usable as a library and it's not even meant to be one. It's basically a set of algorithms, so free functions would be better than classes, IMHO.
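For concreteness, here is a hedged sketch of that style difference (these signatures are illustrative only, not NR's actual API): the NR 2 style passes the integrand as a raw function pointer, while the NR 3 style templates on the callable. Both work fine as plain free functions.

```cpp
#include <cmath>

// Trapezoidal integration, written twice to contrast the two styles.
// These signatures are illustrative only, not NR's actual API.

// NR 2 style: the integrand is passed as a raw C function pointer.
double integrate_fp(double (*f)(double), double a, double b, int n) {
    double h = (b - a) / n;
    double sum = 0.5 * (f(a) + f(b));
    for (int i = 1; i < n; ++i)
        sum += f(a + i * h);
    return sum * h;
}

// NR 3 style: template on the callable, so functors (and nowadays
// lambdas) work too -- and it is still a free function, not a class.
template <typename Func>
double integrate_tmpl(Func f, double a, double b, int n) {
    double h = (b - a) / n;
    double sum = 0.5 * (f(a) + f(b));
    for (int i = 1; i < n; ++i)
        sum += f(a + i * h);
    return sum * h;
}
```

Since the trapezoidal rule is exact for linear integrands, integrating x over [0,1] with either version gives 0.5 to machine precision.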
May 15 2009
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Georg Wrede wrote:
 Lars T. Kyllingstad wrote:
 2. I have to figure out some licensing issues. Some algorithms are 
 clearly public domain, while some things -- like code I've ripped off 
 Numerical Recipes, for instance -- is more questionable. (Although the 
 NR guys do quite a lot of off-ripping themselves. ;)
If you're talking about any one of the books that come up when entering "Numerical Recipes" in the Amazon search box, I'd say that those recipes are freely usable. That's why they're in the books. Checking, of course may be good, but if anyone publishes recipes in a book and then sues people for actually using them, I'd sue *them* for entrapment.
Actually I seem to remember that "Numerical recipes in C" was widely criticized for having incredibly strong restrictions on the published code. Andrei
May 15 2009
next sibling parent Georg Wrede <georg.wrede iki.fi> writes:
Andrei Alexandrescu wrote:
 Georg Wrede wrote:
 Lars T. Kyllingstad wrote:
 2. I have to figure out some licensing issues. Some algorithms are 
 clearly public domain, while some things -- like code I've ripped off 
 Numerical Recipes, for instance -- is more questionable. (Although 
 the NR guys do quite a lot of off-ripping themselves. ;)
If you're talking about any one of the books that come up when entering "Numerical Recipes" in the Amazon search box, I'd say that those recipes are freely usable. That's why they're in the books. Checking, of course may be good, but if anyone publishes recipes in a book and then sues people for actually using them, I'd sue *them* for entrapment.
Actually I seem to remember that "Numerical recipes in C" was widely criticized for having incredibly strong restrictions on the published code.
Exactly. So, what's the point? Here's a bone, widely available on the sidewalk!!! But my god, just try to chew on it, and I'll sue your [butt]. And that's one more reason to lose respect for a certain judicial system. The mere hint of such practices really should make the lawmakers take a serious look at themselves. (But then again, to even become a lawmaker in such a system, you'd have gone through such a chewing that the last bit of decency, sense of right, fairness, and a few other characteristics, simply has to have been erased from your mind. Heck, lawyers even in honest countries aren't respected for righteousness, fairness, integrity, or respect.) I've seen software houses in America publish their library code. And that code contained passages like "this is the unpublished source code [...] copyright withheld [...] all rights reserved". Publishing freaking "unpublished" can only happen in the New World.
May 15 2009
prev sibling next sibling parent "Jérôme M. Berger" <jeberger free.fr> writes:
Andrei Alexandrescu wrote:
 Actually I seem to remember that "Numerical recipes in C" was widely
 criticized for having incredibly strong restrictions on the published code.
Not counting the criticism about its algorithms being plain wrong... (I'm not able to judge myself). http://www.uwyo.edu/buerkle/misc/wnotnr.html http://amath.colorado.edu/computing/Fortran/numrec.html Jerome -- mailto:jeberger free.fr http://jeberger.free.fr Jabber: jeberger jabber.fr
May 16 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 Actually I seem to remember that "Numerical recipes in C" was widely 
 criticized for having incredibly strong restrictions on the published code.
The license is, from my copy of the 1987 edition: "Although this book and its programs are copyrighted, we specifically authorize you, the reader of this book, to make one machine-readable copy of each program for your own use. [...] Distribution of machine-readable programs (either as copied by you or as purchased) to any other person is not authorized."
May 16 2009
next sibling parent Georg Wrede <georg.wrede iki.fi> writes:
Walter Bright wrote:
 Andrei Alexandrescu wrote:
 Actually I seem to remember that "Numerical recipes in C" was widely 
 criticized for having incredibly strong restrictions on the published 
 code.
The license is, from my copy of the 1987 edition: "Although this book and its programs are copyrighted, we specifically authorize you, the reader of this book, to make one machine-readable copy of each program for your own use. [...] Distribution of machine-readable programs (either as copied by you or as purchased) to any other person is not authorized."
(Fumbling below the desk, in search of the shotgun...) Has anyone found a copyright notice 20 years newer than this?
May 16 2009
prev sibling next sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
Walter Bright wrote:
 Andrei Alexandrescu wrote:
 Actually I seem to remember that "Numerical recipes in C" was widely 
 criticized for having incredibly strong restrictions on the published 
 code.
The license is, from my copy of the 1987 edition: "Although this book and its programs are copyrighted, we specifically authorize you, the reader of this book, to make one machine-readable copy of each program for your own use. [...] Distribution of machine-readable programs (either as copied by you or as purchased) to any other person is not authorized."
I'll have to dig my copy out and see if it's any better. Really, if the code can't even legally be used internally for an application developed by a team of programmers then the book is nearly worthless.
May 16 2009
parent reply "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
Sean Kelly wrote:
 Walter Bright wrote:
 Andrei Alexandrescu wrote:
 Actually I seem to remember that "Numerical recipes in C" was widely 
 criticized for having incredibly strong restrictions on the published 
 code.
The license is, from my copy of the 1987 edition: "Although this book and its programs are copyrighted, we specifically authorize you, the reader of this book, to make one machine-readable copy of each program for your own use. [...] Distribution of machine-readable programs (either as copied by you or as purchased) to any other person is not authorized."
I'll have to dig my copy out and see if it's any better. Really, if the code can't even legally be used internally for an application developed by a team of programmers then the book is nearly worthless.
I wouldn't go so far as to call it worthless. It may not be a repository of freely usable algorithms, but it is a nice textbook that covers a wide range of topics and methods. A lot of the algorithms in NR are just their versions of classic (and public domain) algorithms found freely available on NETLIB. So even if one cannot write them directly from the book, there are other ways to get at them. What one can't always get from NETLIB are explanations of how the routines work, and why they're designed the way they are, and that's where NR comes in handy. -Lars
May 16 2009
parent reply Don <nospam nospam.com> writes:
Lars T. Kyllingstad wrote:
 Sean Kelly wrote:
 Walter Bright wrote:
 Andrei Alexandrescu wrote:
 Actually I seem to remember that "Numerical recipes in C" was widely 
 criticized for having incredibly strong restrictions on the 
 published code.
The license is, from my copy of the 1987 edition: "Although this book and its programs are copyrighted, we specifically authorize you, the reader of this book, to make one machine-readable copy of each program for your own use. [...] Distribution of machine-readable programs (either as copied by you or as purchased) to any other person is not authorized."
I'll have to dig my copy out and see if it's any better. Really, if the code can't even legally be used internally for an application developed by a team of programmers then the book is nearly worthless.
I wouldn't go so far as to call it worthless. It may not be a repository of freely usable algorithms, but it is a nice textbook that covers a wide range of topics and methods. A lot of the algorithms in NR are just their versions of classic (and public domain) algorithms found freely available on NETLIB.
Yes, they basically grabbed algorithms from NETLIB and added their own bugs to them.
 So even if one cannot write them directly from the book, there are other 
 ways to get at them. What one can't always get from NETLIB are 
 explanations of how the routines work, and why they're designed the way 
 they are, and that's where NR comes in handy.
Even beyond that, the really big contribution of NR was in publicising some great algorithms.
May 16 2009
parent reply "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
Don wrote:
 Lars T. Kyllingstad wrote:
 Sean Kelly wrote:
 Walter Bright wrote:
 Andrei Alexandrescu wrote:
 Actually I seem to remember that "Numerical recipes in C" was 
 widely criticized for having incredibly strong restrictions on the 
 published code.
The license is, from my copy of the 1987 edition: "Although this book and its programs are copyrighted, we specifically authorize you, the reader of this book, to make one machine-readable copy of each program for your own use. [...] Distribution of machine-readable programs (either as copied by you or as purchased) to any other person is not authorized."
I'll have to dig my copy out and see if it's any better. Really, if the code can't even legally be used internally for an application developed by a team of programmers then the book is nearly worthless.
I wouldn't go so far as to call it worthless. It may not be a repository of freely usable algorithms, but it is a nice textbook that covers a wide range of topics and methods. A lot of the algorithms in NR are just their versions of classic (and public domain) algorithms found freely available on NETLIB.
Yes, they basically grabbed algorithms from NETLIB and added their own bugs to them.
So really, the licence should say: "Although this book and the bugs in its programs are copyrighted, we specifically authorize you, the reader of this book, to make one machine-readable copy of each bug for your own use. [...]" :) -Lars
May 17 2009
parent Fawzi Mohamed <fmohamed mac.com> writes:
On 2009-05-17 12:02:31 +0200, "Lars T. Kyllingstad" 
<public kyllingen.NOSPAMnet> said:

 Don wrote:
 Lars T. Kyllingstad wrote:
 Sean Kelly wrote:
 Walter Bright wrote:
 Andrei Alexandrescu wrote:
 Actually I seem to remember that "Numerical recipes in C" was widely 
 criticized for having incredibly strong restrictions on the published 
 code.
The license is, from my copy of the 1987 edition: "Although this book and its programs are copyrighted, we specifically authorize you, the reader of this book, to make one machine-readable copy of each program for your own use. [...] Distribution of machine-readable programs (either as copied by you or as purchased) to any other person is not authorized."
I'll have to dig my copy out and see if it's any better. Really, if the code can't even legally be used internally for an application developed by a team of programmers then the book is nearly worthless.
I wouldn't go so far as to call it worthless. It may not be a repository of freely usable algorithms, but it is a nice textbook that covers a wide range of topics and methods. A lot of the algorithms in NR are just their versions of classic (and public domain) algorithms found freely available on NETLIB.
Yes, they basically grabbed algorithms from NETLIB and added their own bugs to them.
So really, the licence should say: "Although this book and the bugs in its programs are copyrighted, we specifically authorize you, the reader of this book, to make one machine-readable copy of each bug for your own use. [...]" :) -Lars
The extremely restrictive license of Numerical Recipes means that one should *always* try to avoid using their version. The book does a reasonable job of explaining some good algorithms (though often not the best/newest), but normally one can find good alternatives, or take the original NETLIB algorithms. The unreasonable license of NR was one of the main reasons for GSL (which has its own set of problems). Fawzi
May 17 2009
prev sibling parent reply Georg Wrede <georg.wrede iki.fi> writes:
Walter Bright wrote:
 Andrei Alexandrescu wrote:
 Actually I seem to remember that "Numerical recipes in C" was widely 
 criticized for having incredibly strong restrictions on the published 
 code.
The license is, from my copy of the 1987 edition: "Although this book and its programs are copyrighted, we specifically authorize you, the reader of this book, to make one machine-readable copy of each program for your own use. [...] Distribution of
So, if I use one of their programs in /two/ applications (of course, for my private use only), then I have to buy two copies of the book. Actually, they never state that I get additional single licences by buying further copies, so for two programs and two applications, I still can't be sure they wouldn't sue the pants off me. Gee, real friends. Bobdamn, where the h*ll is the f** shotgun, already!!!
 machine-readable programs (either as copied by you or as purchased) to 
 any other person is not authorized."
Just watched the Eurovision Song Contest. (That's the Biggest Annual TV Event in Europe, Israel, North Africa, half of Asia, and Australia and New Zealand.) Bands like ABBA of Sweden were explicitly assembled for the contest, and many winners enjoy huge world-wide success later.

Turns out Eurovision (a subsidiary of the European Broadcast[ers] Union) "grid-casts" the show on the net. Now, the client to view it with has an "interesting" EULA: www.octoshape.com/play/EULA.pdf

A ripe quote: "You may not collect any information about communication in the network of computers that are operating the Software or about the other users of the Software by monitoring, interdicting or intercepting any process of the Software. Octoshape recognizes that firewalls and anti-virus applications can collect such information, in which case you not are allowed to use or distribute such information."

There is no phrasing fit for a NG that I'd care to formulate about this EULA, or its authors. No normal viewer of the show is likely to ever even notice that such a licence exists, or bother to read it, had they even stumbled upon it. And the above quote is conveniently embedded, instead of being prominently placed in the PDF.

(I regularly don't read EULAs or other such stuff. I've slowly learned that one can perceive that one's understood the text, but when push comes to shove, those b*stards always have an innocent-looking phrase somewhere that you'd never have guessed is the one that ties the noose around your neck. So why not just as well skip the crap, and simply rely on good luck (or the plurality of similar victims), to go on with your life.)

"The EBU also encourages active collaboration between its Members on the basis that they can freely share their knowledge and experience, thus achieving considerably more than individual Members could achieve by themselves. 
Much of this collaboration is achieved through Project Groups which study specific technical issues of common interest: for example, EBU Members have long been preparing for the revision of the 1961 Stockholm Plan. "The EBU places great emphasis on the use of open standards. Widespread use of open standards (such as MPEG-2, DAB, DVB, etc.) ensures interoperability between products from different vendors, as well as facilitating the exchange of programme material between EBU Members and promoting "horizontal markets" for the benefit of all consumers. "EBU Members and the EBU Technical Department have long played an important role in the development of many systems used in radio and television broadcasting[...]" Yeah. So, how come I must vow to shut my eyes when studying the firewall logs???
May 16 2009
parent reply grauzone <none example.net> writes:
 Just watched the Eurovision Song Contest. (That's the Biggest Annual TV 
 Event in Europe, Israel, North Africa, half of Asia, and Australia and 
 New Zealand.) Bands like ABBA of Sweden, were explicitly assembled for 
 the contest, and many winners enjoy a huge world-wide success later.
Now now, who would care about that contest? It's as crappy as most other things on TV. That was that thing where viewers can vote for songs, and the votes are listed by-country, right? (Yay for using torrents, xdcc and mplayer as "TV set" instead!)
 Turns out Eurovision (a subsidiary of the European Broadcast[ers] Union) 
 "grid-casts" the show on the net. Now, the client to view with, has an 
 "interesting" EULA:
And I guess there's no way for playback with open source software? And I tried to find out what they're using. All I found was a Flash applet. Everything that uses Flash is ignored by me by default.
May 17 2009
parent Georg Wrede <georg.wrede iki.fi> writes:
grauzone wrote:
 Just watched the Eurovision Song Contest. (That's the Biggest Annual 
 TV Event in Europe, Israel, North Africa, half of Asia, and Australia 
 and New Zealand.) Bands like ABBA of Sweden, were explicitly assembled 
 for the contest, and many winners enjoy a huge world-wide success later.
Now now, who would care about that contest? It's as crappy as most other things on TV. That was that thing where viewers can vote for songs, and the votes are listed by-country, right?
Yes, that's the one. As interesting as when some armored body builders run all over a lawn, kicking and throwing an oblong ball, and each other, while an entire nation stops, once a year. One only needs to see it because everybody else saw it. Crap or not.
 (Yay for using torrents, xdcc and mplayer as "TV set" instead!)
The day will come, pilgrim. The day will come.
 Turns out Eurovision (a subsidiary of the European Broadcast[ers] 
 Union) "grid-casts" the show on the net. Now, the client to view with, 
 has an "interesting" EULA:
And I guess there's no way for playback with open source software? And I tried to find out what they're using. All I found was a Flash applet. Everything that uses Flash is ignored by me by default.
I'm not going to get even started on that one. Grrrrrr.
May 17 2009
prev sibling parent Georg Wrede <georg.wrede iki.fi> writes:
Lars T. Kyllingstad wrote:
     ...
     Specific problem areas get their own subpackages. At the moment
     these mostly just contain nicer interfaces to the routines in
     scid.ports -- "drivers", if you will.
 
 I've tried a lot of setups, and this is the one I've found most 
 flexible.
That's what I call user-friendly. Of course, that way several (small) things may end up being in more than one package, but that should be okay.
May 15 2009