
D - numerical code heaviness

reply nicO <nicolas.boulay ifrance.com> writes:
Recently I read a flame war on a forum, after some news about Numerical
Recipes.

Most of the people said that C sucks for numerical computing - so, happy
Fortran 90. Even Java is declared too heavy because of its lack of
operator overloading.

So what can D do? Does D need to have an infinite-precision integer type
(using 'inf' for infinity in the range) and a matrix type? Or should we
leave numerical calculation to Fortran?

nicO
Sep 04 2001
parent reply "Walter" <walter digitalmars.com> writes:
"nicO" <nicolas.boulay ifrance.com> wrote in message
news:3B957C4F.52C4D66A ifrance.com...
 So what can D do? Does D need to have an infinite-precision integer type
 (using 'inf' for infinity in the range) and a matrix type? Or should we
 leave numerical calculation to Fortran?
D's support for floating point will be better than C99's. For example, real numbers are by default initialized to NaNs, instead of to 0 or some random bit pattern. D doesn't have an infinite-precision built-in type or a built-in matrix type.
Sep 04 2001
parent reply "Sean L. Palmer" <spalmer iname.com> writes:
"Walter" <walter digitalmars.com> wrote in message
news:9n3s54$oqj$1 digitaldaemon.com...
 D's support for floating point will be better than C99's. For example,
 real numbers are by default initialized to NaNs, instead of to 0 or some
 random bit pattern.
I'm curious what the rationale for this decision was. It seems to make more sense to default-initialize floats to zero, not NaN... if for no other reason than that the ints are all default-initialized to zero. How would you go about specifying a NaN float literal, anyway?
 D doesn't have an infinite precision builtin type or a builtin matrix
 type.
I think the lack of either a matrix type or a way to make one ourselves (operator overloading and member functions on structs) will make this language unsuitable for the computer graphics field. I don't see a big need for infinite-precision math; I'd rather expose the machine's hardware capabilities.

Sean
Oct 28 2001
next sibling parent reply Russell Borogove <kaleja estarcion.com> writes:
"Sean L. Palmer" wrote:
 How would you go about specifying a NAN float literal, anyway?
In C, you have to platform-specifically construct it, bitwise, in integer buffers, then cast. In D, I don't see a nan keyword, so I imagine you'd have to do it much the same way. -RB
Oct 29 2001
parent Russell Borogove <kaleja estarcion.com> writes:
Russell Borogove wrote:
 
 "Sean L. Palmer" wrote:
 How would you go about specifying a NAN float literal, anyway?
In C, you have to platform-specifically construct it, bitwise, in integer buffers, then cast. In D, I don't see a nan keyword, so I imagine you'd have to do it much the same way.
My error. The D spec shows the "float.nan" construct. http://www.digitalmars.com/d/property.html -RB
Oct 31 2001
prev sibling parent reply "Walter" <walter digitalmars.com> writes:
"Sean L. Palmer" <spalmer iname.com> wrote in message
news:9ri9ss$1t33$1 digitaldaemon.com...
 "Walter" <walter digitalmars.com> wrote in message
 news:9n3s54$oqj$1 digitaldaemon.com...
 D's support for floating point will be better than C99's. For example,
 real numbers are by default initialized to NaNs, instead of to 0 or some
 random bit pattern.
I'm curious what the rationale for this decision was. It seems to make more sense to default-initialize floats to zero, not NaN... if for no other reason than that the ints are all default-initialized to zero.
Defaulting them to NaN forces the programmer to initialize them to something intended (since NaNs will propagate through to any final result).
Jan 01 2002
parent reply "Robert W. Cunningham" <rwc_2001 yahoo.com> writes:
Walter wrote:

 "Sean L. Palmer" <spalmer iname.com> wrote in message
 news:9ri9ss$1t33$1 digitaldaemon.com...
 "Walter" <walter digitalmars.com> wrote in message
 news:9n3s54$oqj$1 digitaldaemon.com...
 D's support for floating point will be better than C99's. For example,
 real numbers are by default initialized to NaNs, instead of to 0 or some
 random bit pattern.
I'm curious what the rationale for this decision was. Seems to make more sense to default initialize floats to zero, not NAN... if for no other reason than that the ints are all default-initialized to zero.
By defaulting them to nan, then it forces the programmer to initialize them to something intended (as nan's will propagate through to any final result).
And if integers had the equivalent of NaN, we'd be using that as well. Same goes with infinity. And the same applies to booleans, where some logic systems have "true", "false" and "undefined".

More and more we need our numeric representation systems (and all type systems for that matter) to contain additional state information, so we may then gain greater confidence in the results of calculations of all kinds. The notion of setting and using error values, or throwing exceptions, adds grossly too much "baggage" to the system, and is thus often ignored, or at least very much underused.

So, our floats can tell us "I am not a valid float value" with several shades of meaning. But conventional integers and booleans lack this ability.

IMHO, this is the main drive toward "pure" OO type systems (including "typeless" type systems), where all kinds of "other" information may be bundled as part of the "fundamental type". Every type should have an associated state field with values like "valid", "invalid", and any other extreme states that may need to be represented. (From this perspective, NaN is a kluge! But it is a huge step in the right direction.)

Such state information needs to be part of the type itself, and not a part of some completely separate error or exception system. We see this happening with all higher types in all "modern" languages, and the level of this support has been steadily percolating down the type hierarchy. Consider character strings as an ideal example: D has decided to bite the bullet and make "smarter strings" part of the language.

In the math domain this support can be pushed all the way to the hardware level. While ALUs have always had various condition codes to reflect the status of the result of operations, newer CPU/FPU/ALU architectures have independent condition code bits for each and every register. I'd rather not wait for the hardware to force the software to support "smarter" fundamental types.
All "math" operations in "modern" high-level languages should use smarter fundamental types.

Objections to such systems arise from two primary communities: "bit-twiddling" and "pointer math". Bit-twiddling should be implemented via bit vectors or some other form of collected bits, and not overlaid with the numeric type system. And "pointer math" needs to be part of the "pointer type", and NOT overlaid with the rest of the integer numeric system. Explicit casting can be used to convert between the different domains (though it should only be needed to interface with external environments and systems that lack a robust type system). And yes, they are different domains!

Can this be done efficiently? Efficiently enough to support programming to the "bare metal"? In the era of multi-gigahertz CPUs, I'm certain the answer is "yes". There will always be resource-starved domains where "smart types" will not be applicable, such as the 64KB address space of an 8-bit processor. For such uses we will always have "legacy" languages such as C, and even assembler.

For everything else, including D, I'd very much like to see a "sane" type system top to bottom, especially where "fundamental" numeric types are concerned. We should not be forced to use heavyweight error and exception systems, or tortuous explicit program logic, to handle "problems" encountered when using "fundamental" types! The type system itself should provide more help in this area.

And that's my $0.02. What's yours?

-BobC
Jan 01 2002
next sibling parent reply Russell Borogove <kaleja estarcion.com> writes:
Robert W. Cunningham wrote:

 (Lots of stuff on additional state on fundamental types snipped)
 Can this be done efficiently?  Efficiently enough to support programming to the
 "bare metal"?  In the era of multi-gigahertz CPUs, I'm certain the answer is
 "yes".  There will always be resource-starved domains where "smart types" will
 not be applicable, such as the 64KB address space of an 8-bit processor.  For
 such uses we will always have "legacy" languages such as C, and even assembler.
As a console game programmer, I find that no matter how big the system gets, the demands of some applications will leave you resource-starved - be it a 64K address space on an 8-bit processor like the original Nintendo system, or 32MB on a 32/64-bit processor such as the Playstation 2.

I want the convenience of some of D's constructs (okay, actually, I want the dynamic arrays and the associative arrays, and anything else can go hang) without giving up too much efficiency. In fact, I'm not gonna be able to use D for realtime games for the foreseeable future, because of the GC time hit.
 For everything else, including D, I'd very much like to see a "sane" type
 system top to bottom, especially where "fundamental" numeric types are
 concerned.  We should not be forced to use heavyweight error and exception
 systems, or tortuous explicit program logic, to support "problems" encountered
 when using "fundamental" types!  The type system itself should provide more
 help in this area.
It has to be optional. Period. int32 for a raw integer, Int32Object for a smart type that supports introspection and whatnot. Otherwise, the overhead of allocating a bunch of 'em in an array is unacceptable.
 And that's my $0.02.  What's yours?
There it is. :) -RB
Jan 01 2002
parent reply "Pavel Minayev" <evilone omen.ru> writes:
"Russell Borogove" <kaleja estarcion.com> wrote in message
news:3C3227F9.40902 estarcion.com...

 efficiency. In fact, I'm not gonna be able to use D for
 realtime games for the foreseeable future, because of the
 GC time hit.
After some investigation, I came to the thought that D can actually be used for games and other things like that. The reason is that you can control the process of GCing quite well - look at gc.d in Phobos for a full list.

Most important, you can disable() the GC, and then, when YOU (and not the GC!) want to collect the garbage, you call enable(), genCollect(), disable(). fullCollect() can be called occasionally as well. This way, you prevent it from being run in the middle of a drawing operation, for example... Another cool thing is runDestructors(), which calls all pending destructors for deleted objects - this makes it possible to control the order of destruction, as requested in some earlier thread.

Of course, these are all mostly guesses; only Walter can tell for sure whether they work or not.
Jan 02 2002
parent "Walter" <walter digitalmars.com> writes:
They do work now, it's just that I haven't got the API exposed into D yet.

I don't think that a GC should preclude a great interactive game. Not only
can you control when GC happens; if you have a task that cannot be
interrupted for GC, you can also "preallocate" all the data you'll need. The GC
is not going to randomly interrupt your program to do a collection. The GC happens
synchronously with an attempt to allocate memory.


Jan 07 2002
prev sibling next sibling parent la7y6nvo shamko.com writes:
 IMHO, this is the main drive toward "pure" OO type systems (including
 "typeless" type systems), where all kinds of "other" information may
 be bundled as part of the "fundamental type".  Every type should have
 an associated state field with values like "valid", "invalid", and any
 other extreme states that may need to be represented.  (From this
 perspective, NAN is a kluge!  But it is a huge step in the right
 direction.)
I think it's important to make a comment here. It will help the discussion if people start to distinguish between "type" and "class" (or, as it is sometimes called, "run time type").

"Type" is a compile time notion: syntactic elements in the language have a "type". "Class" is a run time notion: values at run time have a "class" (or, if you prefer, a "run time type"). The two notions are fundamentally different.

In early programming languages there was a one-to-one correspondence between compile time types and run time types, so it made sense to equate the two. In newer languages, and in particular any language that includes some form of inheritance, there is no longer a simple correspondence between the two notions. Thus it has become important to distinguish between them.
Jan 01 2002
prev sibling next sibling parent reply "Sean L. Palmer" <spalmer iname.com> writes:
IMHO, numerical operations that result in a NaN ought to throw an exception.
NaNs are errors, plain and simple.

By default, we want numbers to mean something, even if it's 'zilch'.  Zero.
Zip.  Nada.  More similar to the way ints work.

If you're going to specify a default, have it be something useful, not
something that forces you to override the default.  What good is the
bleeping default then?  Just make it a compile time error not to explicitly
initialize a float if that's what you're after.

Sean

Jan 02 2002
next sibling parent reply "Pavel Minayev" <evilone omen.ru> writes:
"Sean L. Palmer" <spalmer iname.com> wrote in message
news:a0uje1$1ipe$1 digitaldaemon.com...

 IMHO, numerical operations that result in a NaN ought to throw an
 exception. NaNs are errors, plain and simple.

 By default, we want numbers to mean something, even if it's 'zilch'.
 Zero. Zip. Nada. More similar to the way ints work.

 If you're going to specify a default, have it be something useful, not
 something that forces you to override the default. What good is the
 bleeping default then? Just make it a compile time error not to
 explicitly initialize a float if that's what you're after.
Exceptions are quite resource-consuming. If every floating-point operation could potentially raise an exception... it would be slow, and the code would be damn bloated. Of course you can strip it out in a release version, but even then, since NaN is a hardware-supported feature, not special to D, and _very_ fast, it seems logical to still use it.
Jan 02 2002
parent "Robert W. Cunningham" <rwc_2001 yahoo.com> writes:
Pavel Minayev wrote:

Exceptions are quite resource-consuming. If every floating-point operation could potentially raise the exception... it would be slow and the code would be damn bloated. Of course you can strip it off in release version, but even then, since NAN is a hardware-supported feature, not special to D, and _very_ fast, it seems logical to still use it.
I've developed many industrial real-time embedded systems in which I wanted a particular algorithm or computation to always run to completion, then allow me to test the result. Very often, value errors can occur, and the implementation of the algorithm can make an erroneous value "go away". So I need to manually test intermediate results (and trap hardware- and library-generated errors), set an error flag, and test that flag at the end. What a bother! I want the type itself to "know" when it has left the realm of validity, and subsequent calculations should correctly propagate that state through to the end without my having to write a single line of code.

If you ignore floating point exceptions, things like NaNs will propagate properly (at least in IEEE-conformant math libraries and ALUs) through all subsequent operations. But the same cannot be said for integers. And I have written (and continue to write) lots of code for integer math!

Now, using lower-level "simple" integers for other purposes is an entirely different issue! I am not advocating the elimination of language support for the access and manipulation of such storage. I simply want robust and very efficient "smarter" integers to be available within SOME language. I feel it needs to be part of the language simply to ensure libraries written for that language can support it. Why not let D be that language?

Writing "correct" algorithms in integer math is very different from doing the same in floating point. Order of operations is far more critical. Renormalization is needed quite often. And worst of all is the fact that error propagation in integer calculations is vastly different than it is in any floating point or real number domain. Doing all this to make the math correct, then having to layer cumbersome and slow manual checking and handling code on top, can make even a relatively simple integer procedure a nightmare to implement, test and debug.
In my experience, the majority of difficult calculations done in the integer domain are WRONG! I have personally corrected calculations in nuclear reactor control and instrumentation systems that had passed extensive NRC review and testing. Even the proper testing of integer math is often inadequately performed. I have seen integer code in medical instrumentation systems that was similarly flawed, but was not repaired simply because the code was "blessed" by the FDA (needless to say, I didn't accept that contract). I have even patched integer FFT libraries that had been in use for over 20 years, yet had crucial flaws all that time. Integer errors can be extremely tough to avoid, and even tougher to find. Having a language-native implementation of "smarter" integers would go a long, long way to help.

And for things like games, the math is nearly always "wrong" anyway: most games only need to be "close enough", so the standard for the code can be much lower (as it should be). Such applications, and many like them, will always need "dumb" integers just to get the job done. But medical, aviation, military and safety systems need something better. And it is way overdue.

When the accuracy and timeliness of the result really matter, "smart" numeric types are the only way to go. I have implemented such systems in every fixed-point package I've developed, but even with extensive inlining, the performance isn't up to what a native-language implementation could provide. Given smart integers, I would write not only fast and robust integer algorithms, but I could also more easily create fast and robust fixed-point libraries. (And they are needed on a great many 32-bit processors that have embedded versions without an FPU, such as the MIPS, the ARM, and Hitachi's excellent SH-1 and SH-2 families.)

smartInt a = int(POSITIVE_INFINITY); // Imagine it came from a prior operation, such as an accidental division by zero
smartInt b = 5;
smartInt c = a + b; // What should c contain?
I vote for POSITIVE_INFINITY!

In a huge number of cases, floating point is used simply to avoid using integers. For many embedded systems, the loss of speed and increase in power consumption and cost caused by that decision can be a project and product killer. However, sometimes that's the only way to get the software done right and on time. Smarter integers as a native language element would go a long way toward remedying that situation.

If anyone is interested, I can expound on why it is often important for even erroneous calculations to run to completion, but that's another issue for another time.

I'm up to $0.04 now...

-BobC
Jan 02 2002
prev sibling parent "Walter" <walter digitalmars.com> writes:
"Sean L. Palmer" <spalmer iname.com> wrote in message
news:a0uje1$1ipe$1 digitaldaemon.com...
 IMHO, numerical operations that result in a NaN ought to throw an
 exception. NaNs are errors, plain and simple.
In general, yes, but hardware support would be necessary to make that efficient. Can't really afford NaN-check code after every floating point operation.

There are also some legitimate uses for NaNs in computation. For example, suppose you collect a matrix of data, and some of that data is unknown: you fill in those entries with NaN, then do your number crunching. The resulting computation will contain non-NaN values for the valid results, and NaNs for results that depended on missing data.
 By default, we want numbers to mean something, even if it's 'zilch'.
 Zero. Zip. Nada. More similar to the way ints work.
If there were a NaN for ints, I'd use that too!
 If you're going to specify a default, have it be something useful, not
 something that forces you to override the default. What good is the
 bleeping default then? Just make it a compile time error not to
 explicitly initialize a float if that's what you're after.
It's not always statically determinable whether a variable is properly initialized before use.
Jan 03 2002
prev sibling parent reply "Walter" <walter digitalmars.com> writes:
"Robert W. Cunningham" <rwc_2001 yahoo.com> wrote in message
news:3C323036.C7E6802F yahoo.com...
 And that's my $0.02.  What's yours?

 -BobC
I think you wrote an excellent essay. -Walter
Jan 07 2002
parent "Robert W. Cunningham" <rcunning acm.org> writes:
Walter wrote:

 "Robert W. Cunningham" <rwc_2001 yahoo.com> wrote in message
 news:3C323036.C7E6802F yahoo.com...
 And that's my $0.02.  What's yours?

 -BobC
I think you wrote an excellent essay. -Walter
Thanks! But is there a chance it will affect D in any way? -BobC
Jan 08 2002