
digitalmars.D - Decimal Type for D?

reply "Tony" <talktotony email.com> writes:
If D aims to be a popular mainstream language, then it must address the
needs of financial/commerce applications.

One of the fundamental types in this genre is the high-precision decimal
type.

I noticed that Walter argued the advantages of having types built-in rather
than based in libraries for complex numbers, and I assume that the argument
is equally valid for a decimal type.

Are there any plans to include a built-in decimal type in D?

Tony
May 08 2005
next sibling parent reply zwang <nehzgnaw gmail.com> writes:
Tony wrote:
 If D aims to be a popular mainstream language, then it must address the
 needs of financial/commerce applications.
 
 One of the fundamental types in this genre is the high-precision decimal
 type.
 
 I noticed that Walter argued the advantages of having types built-in rather
 than based in libraries for complex numbers, and I assume that the argument
 is equally valid for a decimal type.
 
 Are there any plans to include a built-in decimal type in D?
 
 Tony
 
 
 

I am writing a high-precision decimal type library for D according to the draft of IEEE 754R.
May 08 2005
parent "Tony" <talktotony email.com> writes:
"zwang" <nehzgnaw gmail.com> wrote in message
news:d5me6n$2r10$1 digitaldaemon.com...
 Tony wrote:
 If D aims to be a popular mainstream language, then it must address the
 needs of financial/commerce applications.

 One of the fundamental types in this genre is the high-precision decimal
 type.

 I noticed that Walter argued the advantages of having types built-in rather
 than based in libraries for complex numbers, and I assume that the argument
 is equally valid for a decimal type.

 Are there any plans to include a built-in decimal type in D?

 Tony

I am writing a high-precision decimal type library for D according to the draft of IEEE 754R.

Hi Zwang. That's good news. Is your library going to conform to a particular
precision (Decimal32, Decimal64, Decimal128), or is it to be
unlimited/variable precision?

Also, this is such a fundamental capability that I wonder if it could be
added to Phobos?

Tony
May 09 2005
prev sibling next sibling parent reply Mike Parker <aldacron71 yahoo.com> writes:
Tony wrote:
 If D aims to be a popular mainstream language, then it must address the
 needs of financial/commerce applications.
 
 One of the fundamental types in this genre is the high-precision decimal
 type.
 
 I noticed that Walter argued the advantages of having types built-in rather
 than based in libraries for complex numbers, and I assume that the argument
 is equally valid for a decimal type.
 
 Are there any plans to include a built-in decimal type in D?

D has 3 floating point types: 32-bit float, 64-bit double, and a real type that is the largest floating point size on the current hardware platform (80-bit on Intel). So, what are you looking for beyond this?
May 08 2005
next sibling parent zwang <nehzgnaw gmail.com> writes:
Mike Parker wrote:
 Tony wrote:
 
 If D aims to be a popular mainstream language, then it must address the
 needs of financial/commerce applications.

 One of the fundamental types in this genre is the high-precision decimal
 type.

 I noticed that Walter argued the advantages of having types built-in 
 rather
 than based in libraries for complex numbers, and I assume that the 
 argument
 is equally valid for a decimal type.

 Are there any plans to include a built-in decimal type in D?

D has 3 floating point types: 32-bit float, 64-bit double, and a real type that is the largest floating point size on the current hardware platform (80-bit on Intel). So, what are you looking for beyond this?

"high-precision *decimal* type."
May 08 2005
prev sibling next sibling parent reply Kevin Bealer <Kevin_member pathlink.com> writes:
In article <d5medn$2r8t$1 digitaldaemon.com>, Mike Parker says...
Tony wrote:
 If D aims to be a popular mainstream language, then it must address the
 needs of financial/commerce applications.
 
 One of the fundamental types in this genre is the high-precision decimal
 type.
 
 I noticed that Walter argued the advantages of having types built-in rather
 than based in libraries for complex numbers, and I assume that the argument
 is equally valid for a decimal type.
 
 Are there any plans to include a built-in decimal type in D?

D has 3 floating point types: 32-bit float, 64-bit double, and a real type that is the largest floating point size on the current hardware platform (80-bit on Intel). So, what are you looking for beyond this?

import std.stdio;

int main()
{
    double e1 = 0.11;
    double e2 = 0.39;
    double e3 = 0.5;

    writefln("not equal, ", e3 - (e1 + e2));

    return 0;
}

I think a decimal type implies that numeric computations are done in base 10.
If the above test were computed in base 10, it would display 0. This can be
important if .11, .39, and .5 are parts of a dollar.

I think a good (exchange) format for a generic number would be: <M,B,E> ==
mantissa, base, exponent. Normal floating point numbers (float, double, real)
are a special case where the base is always 2. Decimal numbers would always
use a base of 10. But you can represent other numerical idioms in the same
form. Generally <M,B,E> is M * (B^E).

    double   = <M, 2, E>    // ordinary computer arithmetic
    decimal  = <M, 10, E>   // decimal (usually financial)
    fraction = <N, D, -1>   // N/D; N=numerator, D=denominator

By doing this, all +-*/ calculations can be exact if the inputs are; if you
multiply 11/13 by .001, you get something like: <11,13,-1> * <1,10,-3> =
<11,13000,-1>, i.e. exactly 11/13000. In normal "double" arithmetic, each of
these operations loses precision.

Of course, if you compute a Fourier transform with this kind of number, it
will be slow. And if you don't use some kind of BigInts to do this, at least
one of the components of the number will be in danger of overflowing. If you
don't want BigInts, you could specify a fixed size for each piece and start
rounding off when the components got too big. Three longs should do a pretty
good job.

If you used 2 doubles and a long for the components, I think the results
would still be exact for small values (doubles do not lose precision as long
as they store integer values less than some maximum). When the values got too
large, you would get the transition to imprecise reckoning without notice or
intervention. Doubles will overflow eventually though, so some intervention
is probably inevitable.
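The <M,B,E> triple can be sketched with exact rationals. A hedged illustration in Python (the `mbe` helper is hypothetical, purely for this sketch, not from any library):

```python
# Sketch of the <M,B,E> = M * (B**E) idea using exact rational arithmetic.
from fractions import Fraction

def mbe(m, b, e):
    """Exact value of the triple <M,B,E>, i.e. M * (B ** E)."""
    return Fraction(m) * Fraction(b) ** e

eleven_13ths = mbe(11, 13, -1)   # the fraction 11/13
one_1000th   = mbe(1, 10, -3)    # the decimal 0.001

product = eleven_13ths * one_1000th
print(product)                   # 11/13000, exactly -- a double would round
```

As long as the inputs are exact, every +-*/ result stays exact, at the cost of ever-growing components.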
When I say "intervention" here, I mean testing for overflow conditions and rounding the number when necessary. (For financial apps, overflow would probably want to throw an exception?) Kevin
May 09 2005
parent "TechnoZeus" <TechnoZeus PeoplePC.com> writes:
"Kevin Bealer" <Kevin_member pathlink.com> wrote in message
news:d5n37d$9vk$1 digitaldaemon.com...
 In article <d5medn$2r8t$1 digitaldaemon.com>, Mike Parker says...
Tony wrote:
 If D aims to be a popular mainstream language, then it must address the
 needs of financial/commerce applications.

 One of the fundamental types in this genre is the high-precision decimal
 type.

 I noticed that Walter argued the advantages of having types built-in rather
 than based in libraries for complex numbers, and I assume that the argument
 is equally valid for a decimal type.

 Are there any plans to include a built-in decimal type in D?

D has 3 floating point types: 32-bit float, 64-bit double, and a real type that is the largest floating point size on the current hardware platform (80-bit on Intel). So, what are you looking for beyond this?

import std.stdio;

int main()
{
    double e1 = 0.11;
    double e2 = 0.39;
    double e3 = 0.5;

    writefln("not equal, ", e3 - (e1 + e2));

    return 0;
}

I think a decimal type implies that numeric computations are done in base 10.
If the above test were computed in base 10, it would display 0. This can be
important if .11, .39, and .5 are parts of a dollar.

I think a good (exchange) format for a generic number would be: <M,B,E> ==
mantissa, base, exponent. Normal floating point numbers (float, double, real)
are a special case where the base is always 2. Decimal numbers would always
use a base of 10. But you can represent other numerical idioms in the same
form. Generally <M,B,E> is M * (B^E).

    double   = <M, 2, E>    // ordinary computer arithmetic
    decimal  = <M, 10, E>   // decimal (usually financial)
    fraction = <N, D, -1>   // N/D; N=numerator, D=denominator

By doing this, all +-*/ calculations can be exact if the inputs are; if you
multiply 11/13 by .001, you get something like: <11,13,-1> * <1,10,-3> =
<11,13000,-1>, i.e. exactly 11/13000. In normal "double" arithmetic, each of
these operations loses precision.

Of course, if you compute a Fourier transform with this kind of number, it
will be slow. And if you don't use some kind of BigInts to do this, at least
one of the components of the number will be in danger of overflowing. If you
don't want BigInts, you could specify a fixed size for each piece and start
rounding off when the components got too big. Three longs should do a pretty
good job.

If you used 2 doubles and a long for the components, I think the results
would still be exact for small values (doubles do not lose precision as long
as they store integer values less than some maximum). When the values got too
large, you would get the transition to imprecise reckoning without notice or
intervention. Doubles will overflow eventually though, so some intervention
is probably inevitable.
When I say "intervention" here, I mean testing for overflow conditions and rounding the number when necessary. (For financial apps, overflow would probably want to throw an exception?) Kevin

What you are talking about is a simple form of a symbolic type. The problem
with non-symbolic type precision is simple to explain by example if you
consider the case of division by 3 or 7 in base 10.

A symbolic type would represent 22/7 as a fraction, while a non-symbolic
decimal type would represent it as 3.14285714285714285714285714285714... and
a non-symbolic binary type would represent it as 11.001001001001001... to the
number of digits allowed in the mantissa. Neither of these exactly represents
the quantity of 22 divided by 7, because a finite mantissa can't hold the
result in the base of the given exponent.

A symbolic type would represent 4/3 as a fraction, while a non-symbolic
decimal type would represent it as 1.33333333333333333333333333333333... and
a non-symbolic binary type would represent it as 1.0101010101010101... to the
number of digits allowed in the mantissa. Again, neither of these exactly
represents the quantity of 4 divided by 3.

The problem with representing decimal values in a binary format is that the
factors of 10 are 2 and 5, and 5 is not a factor of 2. To demonstrate this,
we need to try division by 5 or by an integer multiple of 5 (such as 10, 20,
or 100). A symbolic type would represent 7/5 as a fraction, while a
non-symbolic decimal type would represent it as 1.4 and a non-symbolic binary
type would represent it as 1.0110011001100110... to the number of digits
allowed in the mantissa. Notice that in this case the decimal representation
does exactly represent the quantity of 7 divided by 5, but the binary
representation does not, because a finite mantissa can't hold the result of
dividing an integer by a prime that is a factor of neither that integer nor
the base used to scale the mantissa. Other precision losses stem from a
generalization of this limitation.

TZ
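The three cases above can be checked directly. A small Python illustration (chosen here because its Fraction and Decimal types expose all three representations side by side):

```python
# A fraction is representable exactly only when every prime factor of its
# denominator divides the base used for the mantissa.
from decimal import Decimal
from fractions import Fraction

print(Fraction(22, 7))            # symbolic: stays exactly 22/7
print(Decimal(22) / Decimal(7))   # base 10: 3.142857142857... (truncated)

# 7/5 is exact in base 10 (5 divides 10) but repeats in base 2:
print(Decimal(7) / Decimal(5))    # exactly 1.4
print((7 / 5).hex())              # 0x1.6666666666666p+0 -- the repeating
                                  # 0110 bit pattern, rounded off at 53 bits
```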
May 16 2005
prev sibling parent reply =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Mike Parker wrote:

 D has 3 floating point types: 32-bit float, 64-bit double, and a real 
 type that is the largest floating point size on the current hardware 
 platform (80-bit on Intel). So, what are you looking for beyond this?

By the way (regarding non-decimal floating point):

I ported the OpenEXR 16-bit "half" floating point format from C++ to D.
(Basically just for storage; it's converted to "float" for calculations.)
And SoftFloat has support for 80-bit "extended" on non-Intel, and for
128-bit "quad" on all platforms. But it's slower than hardware, though.

Not sure if support for "half" and "quad" is wanted in the D language?
But that is two (or three, with 80-bit) formats beyond what it has now.

Then again, we already have GMP - with all kinds of precisions...
http://home.comcast.net/~benhinkle/gmp-d/

--anders
May 09 2005
parent reply =?ISO-8859-1?Q?Jari-Matti_M=E4kel=E4?= <jmjmak utu.fi.no.sp.am> writes:
Anders F Björklund wrote:
 Mike Parker wrote:
 
 D has 3 floating point types: 32-bit float, 64-bit double, and a real
 type that is the largest floating point size on the current hardware
 platform (80-bit on Intel). So, what are you looking for beyond this?

By the way (regarding non-decimal floating point): I ported the OpenEXR 16-bit "half" floating point format from C++ to D. (Basically just for storage, it's converted to "float" for calculations) And SoftFloat has support for 80-bit "extended" on non-Intel, and for 128-bit "quad" on all platforms. But it's slower than hardware, though. Not sure if support for "half" and "quad" is wanted in the D language ? But that is two (or three, with 80-bit) formats beyond what it has now. Then again, we already have GMP - with all kinds of precisions... http://home.comcast.net/~benhinkle/gmp-d/ --anders

Is there a particular reason to use GMP instead of MAPM [1]? I've used MAPM for complex matrix calculations and it seems to be superior to GMP. [1] http://www.tc.umn.edu/~ringx004/mapm-main.html Jari-Matti
May 09 2005
next sibling parent reply =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Jari-Matti Mäkelä wrote:

Then again, we already have GMP - with all kinds of precisions...
http://home.comcast.net/~benhinkle/gmp-d/

Is there a particular reason to use GMP instead of MAPM [1]? I've used MAPM for complex matrix calculations and it seems to be superior to GMP.

No, just that there already was a D module for GMP available... Are you volunteering to provide an import module for MAPM ? ;-) --anders
May 09 2005
parent reply =?ISO-8859-1?Q?Jari-Matti_M=E4kel=E4?= <jmjmak utu.fi.no.sp.am> writes:
Anders F Björklund wrote:
 Jari-Matti Mäkelä wrote:
 
 Then again, we already have GMP - with all kinds of precisions...
 http://home.comcast.net/~benhinkle/gmp-d/

Is there a particular reason to use GMP instead of MAPM [1]? I've used MAPM for complex matrix calculations and it seems to be superior to GMP.

No, just that there already was a D module for GMP available... Are you volunteering to provide an import module for MAPM ? ;-) --anders

I did a D wrapper for MAPM two years ago, pretty similar to the C++ wrapper
that comes with MAPM. I'm not aware of any "standards" for D modules already
written, so it might be something terrible - it has been sufficient for my
use, though. If anyone wants, I can get it from my old machine, test it, and
put it on my home page.

Jari-Matti
May 09 2005
parent =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Jari-Matti Mäkelä wrote:

Are you volunteering to provide an import module for MAPM ? ;-)

I did a D wrapper for the MAPM two years ago, pretty similar to that C++ -wrapper that comes with MAPM. I'm not aware of any "standards" with D modules already written so it might be something terrible - has been sufficient for my use though. If anyone wants, I can get it from my old machine, test it and put it on my home page.

Please do, I downloaded MAPM 4.9.2 and it worked like a charm here:
"validate : PASS"

The wrapper would probably be mapm.d and mapmextern.d, but if you've already
ported the C++ class along with the macros - the better... :-) Rest of the
m_apm.h header translated to D well enough automatically. (with h2d.pl)

BTW: Thanks for posting this library. Looks great, and zlib license too!
(GMP is under the LGPL, which means it must be dynamically linked)

I might upload half.d and softfloat.d somewhere later on, as well.
(for doing 16-bit, 80-bit and 128-bit floating point - in software)

--anders
May 09 2005
prev sibling parent reply "Ben Hinkle" <bhinkle mathworks.com> writes:
 Is there a particular reason to use GMP instead of MAPM [1]? I've used
 MAPM for complex matrix calculations and it seems to be superior to GMP.

 [1] http://www.tc.umn.edu/~ringx004/mapm-main.html


 Jari-Matti

I've never used MAPM so I'm curious about more details. When you say superior I assume you mean superior performance? What system were you using?
May 09 2005
parent reply =?ISO-8859-1?Q?Jari-Matti_M=E4kel=E4?= <jmjmak utu.fi.no.sp.am> writes:
Ben Hinkle wrote:
Is there a particular reason to use GMP instead of MAPM [1]? I've used
MAPM for complex matrix calculations and it seems to be superior to GMP.

[1] http://www.tc.umn.edu/~ringx004/mapm-main.html


Jari-Matti

I've never used MAPM so I'm curious about more details. When you say superior I assume you mean superior performance? What system were you using?

I'm sorry, I've not yet made any extensive benchmarks. I meant that MAPM has
all the necessary floating point functions integrated within the library.
AFAIK GMP only implements some primitive div/mul/add/sub and get/set-style
functions. MAPM uses the FFT method when multiplying big numbers, but I
think GMP uses it nowadays too. Even though GMP might be faster, it still
doesn't have, for example, any sin, cos, ln, etc. functions.

I've tested MAPM on my Gentoo Linux box with DMD and it has worked quite
well. The use of huge integers isn't possible on my box with only 1.5 GB of
RAM. I think there is some work to do in MAPM on the error reporting system
(exceptions should be used) and thread safety.

Jari-Matti
May 09 2005
parent "Ben Hinkle" <bhinkle mathworks.com> writes:
"Jari-Matti Mäkelä" <jmjmak utu.fi.no.sp.am> wrote in message 
news:d5nui9$ujj$1 digitaldaemon.com...
 Ben Hinkle wrote:
Is there a particular reason to use GMP instead of MAPM [1]? I've used
MAPM for complex matrix calculations and it seems to be superior to GMP.

[1] http://www.tc.umn.edu/~ringx004/mapm-main.html


Jari-Matti

I've never used MAPM so I'm curious about more details. When you say superior I assume you mean superior performance? What system were you using?

I'm sorry, I've not yet made any extensive benchmarks. I meant that MAPM has all the necessary floating point functions integrated within the library.

ok no problem. I was curious what you meant.
 AFAIK GMP only implements some primitive div/mul/add/sub and
 get/set-style functions. MAPM uses the FFT method when multiplying big
 numbers, but I think GMP uses it nowadays too. Even though GMP might be
 faster, it still doesn't have for example any sin, cos, ln, times, etc.
 functions.

Yeah - that is annoying. The mpfr routines have various special functions but it's a pity the base mpf doesn't. I suspect the gmp authors are more interested in the integer than the floating point support.
 I've tested the MAPM on my Gentoo Linux box with DMD and it has worked
 quite well. The use of huge integers isn't possible on my box with only
 1,5GB of RAM. I think there is some work to do in MAPM with the error
 reporting system (exceptions should be used) and thread safety.


 Jari-Matti 

May 09 2005
prev sibling parent reply "Walter" <newshound digitalmars.com> writes:
"Tony" <talktotony email.com> wrote in message
news:d5m921$2n6v$2 digitaldaemon.com...
 If D aims to be a popular mainstream language, then it must address the
 needs of financial/commerce applications.

 One of the fundamental types in this genre is the high-precision decimal
 type.

 I noticed that Walter argued the advantages of having types built-in rather
 than based in libraries for complex numbers, and I assume that the argument
 is equally valid for a decimal type.

 Are there any plans to include a built-in decimal type in D?

If you use 64 bit longs to represent pennies, the max dollar amount representable would be $92,233,720,368,547,758 or 92 quadrillion dollars. Not even the federal government will overflow that anytime soon <g>.
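Walter's figure checks out. A quick sketch of the arithmetic (Python used here just to do the division):

```python
# The largest amount representable when a signed 64-bit long holds pennies.
MAX_LONG = 2**63 - 1                  # 9,223,372,036,854,775,807 cents

dollars, cents = divmod(MAX_LONG, 100)
print(f"${dollars:,}.{cents:02d}")    # $92,233,720,368,547,758.07
```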
May 09 2005
next sibling parent reply "Tony" <talktotony email.com> writes:
"Walter" <newshound digitalmars.com> wrote in message
news:d5o7uu$165o$1 digitaldaemon.com...
 "Tony" <talktotony email.com> wrote in message
 news:d5m921$2n6v$2 digitaldaemon.com...
 If D aims to be a popular mainstream language, then it must address the
 needs of financial/commerce applications.

 One of the fundamental types in this genre is the high-precision decimal
 type.

 I noticed that Walter argued the advantages of having types built-in rather
 than based in libraries for complex numbers, and I assume that the argument
 is equally valid for a decimal type.

 Are there any plans to include a built-in decimal type in D?

If you use 64 bit longs to represent pennies, the max dollar amount representable would be $92,233,720,368,547,758 or 92 quadrillion dollars. Not even the federal government will overflow that anytime soon <g>.

Integers are a very sub-optimal way of representing decimal values. The May
2004 issue of Dr Dobb's Journal has an article titled "Java & Monetary Data"
which addresses some of the issues with using either integers or floating
point numbers for this purpose. The article is available online if you have
a subscription.

I feel quite bad about raising this, as Walter has put so much work into D
already. A new data type is probably not something he would like to take on
at this stage. However, I do believe it is important. For instance, I think
programmers involved in e-commerce applications would consider the presence
of a decimal type to be an indication that the language is trying to address
their needs.

Tony
Melbourne, Australia
May 11 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Tony" <talktotony email.com> wrote in message
news:d5tgha$1cnk$1 digitaldaemon.com...
 "Walter" <newshound digitalmars.com> wrote in message
 If you use 64 bit longs to represent pennies, the max dollar amount
 representable would be $92,233,720,368,547,758 or 92 quadrillion dollars.
 Not even the federal government will overflow that anytime soon <g>.

 The May 2004 issue of Dr Dobb's Journal has an article titled: "Java &
 Monetary Data" which addresses some of the issues with using either integers
 or floating point numbers for this purpose. The article is available online
 if you have a subscription.

Would you care to summarize?
May 11 2005
parent reply zwang <nehzgnaw gmail.com> writes:
Walter wrote:
 "Tony" <talktotony email.com> wrote in message
 news:d5tgha$1cnk$1 digitaldaemon.com...
 
"Walter" <newshound digitalmars.com> wrote in message

If you use 64 bit longs to represent pennies, the max dollar amount
representable would be $92,233,720,368,547,758 or 92 quadrillion dollars.
Not even the federal government will overflow that anytime soon <g>.

Integers are a very sub-optimal way of representing decimal values. The May
2004 issue of Dr Dobb's Journal has an article titled: "Java & Monetary
Data" which addresses some of the issues with using either integers or
floating point numbers for this purpose. The article is available online if
you have a subscription.

Would you care to summarize?

quoted from the article:

"The use of integer data types to represent monetary values suffers from a
number of significant drawbacks. Integer data types are limited in the
magnitude of values that they can represent. More importantly, doing
arithmetic on scaled integer values (in particular, multiplication and
division) can be messy and prone to programming errors. Intermediate results
must be appropriately scaled to ensure the preservation of all significant
digits, and you must keep track of where the implied decimal point is
located. Because of these and other issues, integer data types are not a
practical choice for use with monetary data."
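A toy sketch of the bookkeeping the article describes (my numbers, not the article's): multiplying scaled integers adds their scales, so intermediate results must be rescaled by hand.

```python
# Two scaled integers: multiplying a scale-2 value by a scale-4 value
# yields a scale-6 result that must be manually rescaled, or the implied
# decimal point ends up in the wrong place.
price = 1995                   # $19.95 stored at scale 2
rate = 825                     # 8.25% = 0.0825 stored at scale 4

raw = price * rate             # 1645875 at scale 6, i.e. $1.645875
tax = (raw + 5000) // 10000    # manual half-up rescale back to scale 2

print(tax)                     # 165 cents, i.e. $1.65
```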
May 11 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"zwang" <nehzgnaw gmail.com> wrote in message
news:d5uc9e$21vb$1 digitaldaemon.com...
 quoted from the article:

 "The use of integer data types to represent monetary values suffers from a
 number of significant drawbacks. Integer data types are limited in the
 magnitude of values that they can represent. More importantly, doing
 arithmetic on scaled integer values (in particular, multiplication and
 division) can be messy and prone to programming errors. Intermediate
 results must be appropriately scaled to ensure the preservation of all
 significant digits, and you must keep track of where the implied decimal
 point is located. Because of these and other issues, integer data types are
 not a practical choice for use with monetary data."

I don't understand all the issues around a monetary data type. But the
article is wrong on one important point - the scaling and keeping track of
the decimal point. That would be true if you are doing fixed point math of
the form 1.01*1.01; the result is 1.0201 and you have to remember to move
the decimal point. But, and this is a big but, there aren't any financial
calculations where you multiply a dollar times a dollar. If you are, you've
"screwed up the dimensional analysis", as they say in physics. Dollars are
multiplied by time, by a percent, by an interest rate, but never by another
dollar.

Of more interest, however, is overflow. When working with money, you really
don't want the "silent, wraparound overflow" that happens with computer
integer math.

I read the article, and frankly I wouldn't use the guy's money class. I
don't believe one can successfully do money calculations without
understanding things like roundoff, precision, overflow, etc., and trying to
paper over these things with a class just seems like a recipe for error.
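That "silent, wraparound overflow" can be simulated. A hedged sketch in Python (whose ints are unbounded, so the 64-bit two's-complement wrap is applied by hand):

```python
# Simulating two's-complement 64-bit wraparound, as C/D long arithmetic does.
def wrap64(n):
    """Reduce n to a signed 64-bit value."""
    n &= (1 << 64) - 1
    return n - (1 << 64) if n >= (1 << 63) else n

MAX_CENTS = (1 << 63) - 1        # largest positive penny count
print(wrap64(MAX_CENTS + 1))     # -9223372036854775808: money gone, no error
```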
May 11 2005
next sibling parent "Ben Hinkle" <ben.hinkle gmail.com> writes:
"Walter" <newshound digitalmars.com> wrote in message 
news:d5ugim$253t$1 digitaldaemon.com...
 "zwang" <nehzgnaw gmail.com> wrote in message
 news:d5uc9e$21vb$1 digitaldaemon.com...
 quoted from the article:

 "The use of integer data types to represent monetary values suffers from a
 number of significant drawbacks. Integer data types are limited in the
 magnitude of values that they can represent. More importantly, doing
 arithmetic on scaled integer values (in particular, multiplication and
 division) can be messy and prone to programming errors. Intermediate
 results must be appropriately scaled to ensure the preservation of all
 significant digits, and you must keep track of where the implied decimal
 point is located. Because of these and other issues, integer data types are
 not a practical choice for use with monetary data."

I don't understand all the issues around a monetary data type. But the article is wrong on one important point - the scaling and keeping track of the decimal point. That would be true if you are doing fixed point math of the form 1.01*1.01, the result is 1.0201 and you have to remember to move the decimal point. But, and this is a big but, there aren't any financial calculations where you multiply a dollar times a dollar. If you are, you've "screwed up the dimensional analysis", as they say in physics. Dollars are multiplied by time, by a percent, by an interest rate, but never by another dollar. Of more interest, however, is overflow. When working with money, you really don't want the "silent, wraparound overflow" that happens with computer integer math. I read the article, and frankly I wouldn't use the guy's money class. I don't believe one can successfully do money calculations without understanding things like roundoff, precision, overflow, etc., and trying to paper over these things with a class just seems like a recipe for error.

Agreed. This made me think of one of my favorite fixed-point quantities: the
"scaled point" (sp) in TeX. A scaled point is (2^-16)*1 point. Knuth says
100 sp is roughly the wavelength of visible light, and TeX's maximum
dimension of 2^30 sp is about 18 feet. So in a 32-bit int TeX measures
distances from the wavelength of light to 18 feet - that's a decent range!

I tend to agree that a fixed-point integer of 64 bits and an appropriate
scaling factor should be enough to track any reasonable "real world"
quantity (except for some crazy physics student who wants to add the width
of an atom to a galaxy or something). Now whether money is part of the real
world is another question.
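The scaled-point range can be verified in a couple of lines (my own arithmetic, using TeX's 72.27 points per inch):

```python
# Checking the scaled-point range: 1 sp = 2**-16 pt, and TeX's maximum
# dimension is just under 2**30 sp.
PT_PER_INCH = 72.27

def sp_to_inches(sp):
    return sp / 65536 / PT_PER_INCH

light_nm = sp_to_inches(100) * 2.54e7   # 100 sp in nanometres
max_feet = sp_to_inches(2**30) / 12     # maximum dimension in feet

print(round(light_nm))                  # ~536 nm, mid-visible-spectrum
print(round(max_feet, 2))               # 18.89 feet
```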
May 11 2005
prev sibling parent reply MicroWizard <MicroWizard_member pathlink.com> writes:
I agree with you totally. I have worked for banks for years. The banks know
how to deal with money. NO rounding and NO overflow can ever happen
unintentionally.

Some banks have buggy systems, but as far as I know, all of them KNOW about
these kinds of errors in monetary calculations. It's vital for them.

These posts and the article (which I was not able to read) seem to me a
purely theoretical issue with no connection to the real world.

Tamas

Of more interest, however, is overflow. When working with money, you really
don't want the "silent, wraparound overflow" that happens with computer
integer math.

I read the article, and frankly I wouldn't use the guy's money class. I
don't believe one can successfully do money calculations without
understanding things like roundoff, precision, overflow, etc., and trying to
paper over these things with a class just seems like a recipe for error.

May 12 2005
parent reply Derek Parnell <dparnell admerex.com> writes:
On Thu, 12 May 2005 10:01:13 +0000 (UTC), MicroWizard wrote:

 I agree with you totally. I work for banks for years. The banks know how to
 deal with money. NO rounding, NO overflow could happen unintentionally NEVER.

I agree. My company's main product is a Retail Banking system, and all
rounding and overflows are very carefully coded and accounted for. You get
rounding issues when calculating interest and loan repayment amounts.

--
Derek Parnell
Technical Product Manager - RBX, PBX
Admerex Solutions Pty Ltd
Melbourne, AUSTRALIA
12/05/2005 8:36:48 PM
May 12 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Derek Parnell" <dparnell admerex.com> wrote in message
news:6scaj2gqf1s3$.xj18j8wulwh6$.dlg 40tude.net...
 On Thu, 12 May 2005 10:01:13 +0000 (UTC), MicroWizard wrote:

 I agree with you totally. I work for banks for years. The banks know how to
 deal with money. NO rounding, NO overflow could happen unintentionally NEVER.

I agree. My company's main product is a Retail Banking system and all
rounding and overflows are very carefully coded and accounted for. You get
rounding issues when calculating interest and loan repayment amounts.

If anyone is in doubt about this, recall the story a few years ago about the bank programmer who adjusted the partial penny roundoffs to credit his personal account with the roundoff errors. Over time, this amounted to serious money ($millions), serious enough that it tripped him up.
May 12 2005
parent reply Sean Kelly <sean f4.ca> writes:
In article <d60m9n$12go$1 digitaldaemon.com>, Walter says...
"Derek Parnell" <dparnell admerex.com> wrote in message
news:6scaj2gqf1s3$.xj18j8wulwh6$.dlg 40tude.net...
 On Thu, 12 May 2005 10:01:13 +0000 (UTC), MicroWizard wrote:

 I agree with you totally. I work for banks for years. The banks know how to
 deal with money. NO rounding, NO overflow could happen unintentionally NEVER.

I agree. My company's main product is a Retail Banking system and all
rounding and overflows are very carefully coded and accounted for. You get
rounding issues when calculating interest and loan repayment amounts.


When math errors quite literally cost the customer money, they want the code
to be infallible. There's something about watching money literally disappear
that drives accountants crazy :)

In my experience, rounding error is much more of a problem with fixed
decimal calculations than with floating point calculations - losing a
fraction of a penny every step of the way adds up fast. I think it generally
makes more sense to do floating point math and then figure out what to do
with the fractional amount at the end.
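The per-step rounding loss can be sketched with made-up numbers (a hypothetical daily rate and balance, using Python's decimal module for the illustration):

```python
# Rounding to whole cents at every step vs. carrying full precision and
# rounding once at the end.
from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal("0.01")
rate = Decimal("0.0137")            # hypothetical daily rate
balance = Decimal("123.45")

step_rounded = total_exact = Decimal(0)
for _ in range(365):
    interest = balance * rate       # 1.691265 each day (no compounding here)
    step_rounded += interest.quantize(CENT, rounding=ROUND_HALF_UP)
    total_exact += interest

print(step_rounded)                              # 616.85
print(total_exact.quantize(CENT, ROUND_HALF_UP)) # 617.31 -- 46 cents apart
```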
If anyone is in doubt about this, recall the story a few years ago about the
bank programmer who adjusted the partial penny roundoffs to credit his
personal account with the roundoff errors. Over time, this amounted to
serious money ($millions), serious enough that it tripped him up.

FWIW, Hedge Funds typically designate a "rounding partner" that receives these fractional pennies. Someday maybe I'll do some data scraping and see what the average per-period rounding amount is for a typical partnership, but I can tell you right now that the number isn't small. Sean
May 12 2005
parent reply sai <sai_member pathlink.com> writes:
I worked for a well-known financial company for a year. There, surprisingly, all
transactions and processing of money were done using the simple float type, in C++
as well as Java. Rounding off and the like was taken care of at the end. Just a note: the
company invests millions of dollars in mutual funds, hedge funds, etc. and makes more
than 50% profit every year, at least that's what we were told.

-Sai 


In article <d60otv$1495$1 digitaldaemon.com>, Sean Kelly says...
In article <d60m9n$12go$1 digitaldaemon.com>, Walter says...
"Derek Parnell" <dparnell admerex.com> wrote in message
news:6scaj2gqf1s3$.xj18j8wulwh6$.dlg 40tude.net...
 On Thu, 12 May 2005 10:01:13 +0000 (UTC), MicroWizard wrote:

 I agree with you totally. I work for banks for years. The banks know how


 with money. NO rounding, NO overflow could happen unintentionally NEVER.

I agree. My company's main product is a Retail Banking system and all rounding and overflows are very carefully coded and accounted for. You get rounding issues when calculating interest and loan repayment amounts.


When math errors quite literally cost the customer money, they want the code to be infallible. There's something about watching money literally disappear that drives accountants crazy :) In my experience, rounding error is much more of a problem with fixed decimal calculations than with floating point calculations--losing a fraction of a penny every step of the way adds up fast. I think it generally makes more sense to do floating point math and then figure out what to do with the fractional amount at the end.
If anyone is in doubt about this, recall the story a few years ago about the
bank programmer who adjusted the partial penny roundoffs to credit his
personal account with the roundoff errors. Over time, this amounted to
serious money ($millions), serious enough that it tripped him up.

FWIW, Hedge Funds typically designate a "rounding partner" that receives these fractional pennies. Someday maybe I'll do some data scraping and see what the average per-period rounding amount is for a typical partnership, but I can tell you right now that the number isn't small. Sean

May 12 2005
next sibling parent Hasan Aljudy <hasan.aljudy gmail.com> writes:
sai wrote:
 I worked for a well-known financial company for a year. There, surprisingly, all
 transactions and processing of money were done using the simple float type, in C++
 as well as Java. Rounding off and the like was taken care of at the end. Just a note: the
 company invests millions of dollars in mutual funds, hedge funds, etc. and makes more
 than 50% profit every year, at least that's what we were told.
 
 -Sai 
 
 

That's scary!!!!
May 13 2005
prev sibling parent "TechnoZeus" <TechnoZeus PeoplePC.com> writes:
"sai" <sai_member pathlink.com> wrote in message
news:d60tq8$16rq$1 digitaldaemon.com...
 I worked for a well-known financial company for a year. There, surprisingly, all
 transactions and processing of money were done using the simple float type, in C++
 as well as Java. Rounding off and the like was taken care of at the end. Just a note: the
 company invests millions of dollars in mutual funds, hedge funds, etc. and makes more
 than 50% profit every year, at least that's what we were told.

 -Sai


 In article <d60otv$1495$1 digitaldaemon.com>, Sean Kelly says...
In article <d60m9n$12go$1 digitaldaemon.com>, Walter says...
"Derek Parnell" <dparnell admerex.com> wrote in message
news:6scaj2gqf1s3$.xj18j8wulwh6$.dlg 40tude.net...
 On Thu, 12 May 2005 10:01:13 +0000 (UTC), MicroWizard wrote:

 I agree with you totally. I work for banks for years. The banks know how


 with money. NO rounding, NO overflow could happen unintentionally NEVER.

I agree. My company's main product is a Retail Banking system and all rounding and overflows are very carefully coded and accounted for. You get rounding issues when calculating interest and loan repayment amounts.


When math errors quite literally cost the customer money, they want the code to be infallible. There's something about watching money literally disappear that drives accountants crazy :) In my experience, rounding error is much more of a problem with fixed decimal calculations than with floating point calculations--losing a fraction of a penny every step of the way adds up fast. I think it generally makes more sense to do floating point math and then figure out what to do with the fractional amount at the end.
If anyone is in doubt about this, recall the story a few years ago about the
bank programmer who adjusted the partial penny roundoffs to credit his
personal account with the roundoff errors. Over time, this amounted to
serious money ($millions), serious enough that it tripped him up.

FWIW, Hedge Funds typically designate a "rounding partner" that receives these fractional pennies. Someday maybe I'll do some data scraping and see what the average per-period rounding amount is for a typical partnership, but I can tell you right now that the number isn't small. Sean


Actually, it's not difficult to keep absolute precision when doing decimal math in binary types, as long as you keep track of the remainders when doing division and account for them in later calculations. For example:

1.01 * 101%
= (101/100) * (101/100)
= (1 + 1/100) * (1 + 1/100)
= (1 + 1/100) + (1 + 1/100)/100
= 1 + 1/100 + 1/100 + 1/10000
= 1 + 2/100 + 1/10000
= 1 + 0.02 + 0.0001
= 1.0201

Therefore, $1.01 * 101% = $1.02 with a remainder of one ten-thousandth of $1, or a hundredth of a cent. That hundredth of a cent can be stored as a fraction until it is needed in further calculations, facilitating a partial symbolic representation of numbers with no loss of precision within the scope of decimal percentage calculations. A crude example, but I think it demonstrates the point.

TZ
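TZ's remainder-carrying scheme amounts to exact rational arithmetic; a minimal sketch in Python, whose standard fractions module stands in for the symbolic fraction storage described above:

```python
from fractions import Fraction

# $1.01 at 101%, both held as exact fractions of a dollar.
amount = Fraction(101, 100)
rate = Fraction(101, 100)

product = amount * rate   # exactly 10201/10000, i.e. $1.0201

# Split into whole cents plus the exact sub-cent remainder.
cents = product * 100                                 # 10201/100 cents
whole_cents = cents.numerator // cents.denominator    # 102 cents = $1.02
remainder = cents - whole_cents                       # 1/100 of a cent

assert product == Fraction(10201, 10000)
assert whole_cents == 102
assert remainder == Fraction(1, 100)
```

The 1/100-of-a-cent remainder stays exact and can be folded into the next calculation rather than being discarded.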
May 16 2005
prev sibling parent reply Benji Smith <dlanguage xxagg.com> writes:
Walter wrote:
 "Tony" <talktotony email.com> wrote in message
 news:d5m921$2n6v$2 digitaldaemon.com...
 If you use 64 bit longs to represent pennies, the max dollar amount
 representable would be $92,233,720,368,547,758 or 92 quadrillion dollars.
 Not even the federal government will overflow that anytime soon <g>.

Representing currency in pennies just isn't good enough for all financial applications. I might need to accurately keep track of tenths or hundredths of a cent. Floating point types don't work because of accuracy problems. Integer types don't work because my application might work with cents, and a library that I use might work with tenths-of-a-cent. A decimal type would be able to provide the appropriate level of precision/accuracy, while remaining neutral about the number of digits on the right-hand side of the decimal point. Of course, decimal types could be implemented easily in a library, but I think I'd rather have them in the language (or, at the very least, in the standard library). --BenjiSmith
May 11 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Benji Smith" <dlanguage xxagg.com> wrote in message
news:d5tvu2$1ple$1 digitaldaemon.com...
 Walter wrote:
 "Tony" <talktotony email.com> wrote in message
 news:d5m921$2n6v$2 digitaldaemon.com...
 If you use 64 bit longs to represent pennies, the max dollar amount
 representable would be $92,233,720,368,547,758 or 92 quadrillion


 Not even the federal government will overflow that anytime soon <g>.

Representing currency in pennies just isn't good enough for all financial applications. I might need to accurately keep track of tenths or hundredths of a cent. Floating point types don't work because of accuracy problems. Integer types don't work because my application might work with cents, and a library that I use might work with tenths-of-a-cent. A decimal type would be able to provide the appropriate level of precision/accuracy, while remaining neutral about the number of digits on the right-hand side of the decimal point. Of course, decimal types could be implemented easily in a library, but I think I'd rather have them in the language (or, at the very least, in the standard library).

I obviously don't understand all the issues for such a type, so it would be difficult to design one. I suggest using a class for the time being.
May 11 2005
parent reply "TechnoZeus" <TechnoZeus PeoplePC.com> writes:
"Walter" <newshound digitalmars.com> wrote in message
news:d5ud1h$22h9$1 digitaldaemon.com...
 "Benji Smith" <dlanguage xxagg.com> wrote in message
 news:d5tvu2$1ple$1 digitaldaemon.com...
 Walter wrote:
 "Tony" <talktotony email.com> wrote in message
 news:d5m921$2n6v$2 digitaldaemon.com...
 If you use 64 bit longs to represent pennies, the max dollar amount
 representable would be $92,233,720,368,547,758 or 92 quadrillion


 Not even the federal government will overflow that anytime soon <g>.

Representing currency in pennies just isn't good enough for all financial applications. I might need to accurately keep track of tenths or hundredths of a cent. Floating point types don't work because of accuracy problems. Integer types don't work because my application might work with cents, and a library that I use might work with tenths-of-a-cent. A decimal type would be able to provide the appropriate level of precision/accuracy, while remaining neutral about the number of digits on the right-hand side of the decimal point. Of course, decimal types could be implemented easily in a library, but I think I'd rather have them in the language (or, at the very least, in the standard library).

I obviously don't understand all the issues for such a type, so it would be difficult to design one. I suggest using a class for the time being.

Designing a decimal type isn't really difficult at all. The only necessary stipulation is that the exponent be interpreted as a power of 10 rather than of 2. The real chore is in writing all of the base-10 math functions to go with it.

You mentioned the multiplication of 1.01 by 1.01 in another reply, but said that you could only see that happening if someone were multiplying dollar amounts by dollar amounts. Yet you also mentioned percentages and interest rates. Interest rates are generally given as percentages, and a percentage is by definition some number divided by 100, such as 101% = 101/100 = 1.01, which in binary is roughly the value represented by 1.00 followed by a repeated "000010100011110101110000101000111101011100001010001111010111", taking an infinite number of mantissa digits to represent precisely in binary. In fact, the same problem exists with any percentage between (but not including) 0% and 100%, other than x% where x is an integer multiple of 25 or the product of such an integer and a negative integer power of 2.

As much as I dislike the decimal number system, it is used a lot... so I agree that native support for a decimal type would be good to have. Support for a symbolic type would be even better, but much harder to implement. I think a decimal type would be a reasonable compromise to consider for native support. - Just my opinion.

TZ
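The non-terminating binary expansion TZ describes is observable directly: converting the double 1.01 to an exact decimal shows the value a binary float actually stores (Python's decimal module used here purely for illustration):

```python
from decimal import Decimal

# Decimal(float) expands a binary double to its exact decimal value.
stored = Decimal(1.01)

# The double holds the nearest representable binary value, not 1.01 itself.
assert stored != Decimal("1.01")

# The error is tiny, but it is there, and it can accumulate.
assert abs(stored - Decimal("1.01")) < Decimal("1e-15")
```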
May 16 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"TechnoZeus" <TechnoZeus PeoplePC.com> wrote in message
news:d69l7d$1bo0$1 digitaldaemon.com...
 In fact, the same problem exists with any percentage between (but not

 other than x% where x is an integer multiple of 25 or the product of such

A simple solution to that problem: to get 6% of the value of n, n being in pennies: n = (n * 6) / 100;
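Walter's multiply-before-divide form truncates the result; if rounding to the nearest penny is wanted instead, adding half the divisor before dividing does it while staying entirely in integer math. A sketch (the helper name is invented here for illustration):

```python
def percent_of(pennies: int, percent: int) -> int:
    """Return percent% of a nonnegative amount in pennies,
    rounded to the nearest penny (halves round up).

    Multiplying before dividing keeps the intermediate value exact;
    adding 50 (half of 100) before the integer division rounds it.
    """
    return (pennies * percent + 50) // 100

# 6% of $12.34 (1234 pennies) is 74.04 pennies -> 74.
assert percent_of(1234, 6) == 74

# 6% of 9 pennies is 0.54 pennies -> rounds to 1,
# where plain truncation (9 * 6 // 100) would give 0.
assert percent_of(9, 6) == 1
```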
May 16 2005
parent reply "TechnoZeus" <TechnoZeus PeoplePC.com> writes:
"Walter" <newshound digitalmars.com> wrote in message
news:d6bk8g$5ra$1 digitaldaemon.com...
 "TechnoZeus" <TechnoZeus PeoplePC.com> wrote in message
 news:d69l7d$1bo0$1 digitaldaemon.com...
 In fact, the same problem exists with any percentage between (but not

 other than x% where x is an integer multiple of 25 or the product of such

A simple solution to that problem: to get 6% of the value of n, n being in pennies: n = (n * 6) / 100;

Yep... provided you don't mind rounding the binary results to the nearest cent. Let's try that in binary... for n = 1 cent: 1 * 110 = 110; 110 / 1100100 = 0.00001111010111000010100011110... Okay, let's not. Hehe.

Yes, it's better to do the multiplication before the division when loss of accuracy is more of a concern than overflow, but without an infinitely large mantissa there will always be limitations when using one number system to represent another. So no, it's not really a simple solution. It's a workaround... and in general it works. However, it's not 100% accurate.

Binary math is simple, and computers tend to handle it quite well for just that reason. However, while any base-2 number of finite length can be represented as a base-10 number of finite length, the reverse is not true: many finite-length base-10 numbers require an infinite number of base-2 digits to represent precisely. A base system like base 510510 or base 9699690 in turn has the same relation to base 10 in this respect as base 10 has to base 2. In fact, base 9699690 can represent, in a finite number of digits, any number that can be represented in a finite number of digits in any base that is not an integer multiple of a prime greater than 20. The down side is that calculations in such a base tend to be much less efficient on modern computers than the same calculations performed in binary.

This is why, for decades now, many processors have had native support for binary-coded decimal. A binary-coded decimal format represents the mantissa of a number in base-10 digits, each of which is stored in a nibble (4 bits) of memory, and uses an exponent that is treated as a signed power of 10. Usually the digits are actually hexadecimal digits with a functional range from 0 through F, but only the values 0 through 9 are needed, since their place values are treated as powers of ten.

There are also cases where the digit values A through F are invalid or have special meanings, or where 3 decimal digits are packed together in a 10-bit field and treated together as a single base-1000 digit (using the range 000 through 999 out of the full 0 through 1023 binary range), with the place values of such compound digits being treated as powers of 1000.

The unfortunate thing about all this is that while base-2 representations have been standardized rather uniformly, base-10 encodings have not. Therefore, a base-10 number format that is supported directly by a microprocessor will generally tend to be platform specific. However, standards have been created, and are currently being updated and revised. Zwang is on the right track. If anyone was wondering what IEEE 754r is, check out the Wikipedia page...

http://en.wikipedia.org/wiki/IEEE_754r

Walter, if you're interested in information on IEEE 754r specifically for programming language developers (I'm guessing zwang already has this link), check out...

http://en.wikipedia.org/wiki/IEEE_754r/Annex_L

TZ
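The nibble-per-digit encoding described above can be sketched in a few lines (Python for illustration; this is generic packed BCD, not any particular processor's or standard's exact format):

```python
def to_packed_bcd(n: int) -> bytes:
    """Pack a nonnegative integer into BCD: one decimal digit per nibble."""
    digits = str(n)
    if len(digits) % 2:            # pad to an even number of digits
        digits = "0" + digits
    return bytes((int(hi) << 4) | int(lo)
                 for hi, lo in zip(digits[::2], digits[1::2]))

def from_packed_bcd(b: bytes) -> int:
    """Unpack BCD bytes back into an integer."""
    return int("".join(f"{byte >> 4}{byte & 0xF}" for byte in b))

# 1234 packs to nibbles 1,2,3,4 -> bytes 0x12 0x34.
assert to_packed_bcd(1234) == b"\x12\x34"
assert to_packed_bcd(7) == b"\x07"
assert from_packed_bcd(to_packed_bcd(987654)) == 987654
```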
May 16 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"TechnoZeus" <TechnoZeus PeoplePC.com> wrote in message
news:d6c02l$l28$1 digitaldaemon.com...
 "Walter" <newshound digitalmars.com> wrote in message

 "TechnoZeus" <TechnoZeus PeoplePC.com> wrote in message
 news:d69l7d$1bo0$1 digitaldaemon.com...
 In fact, the same problem exists with any percentage between (but not

 other than x% where x is an integer multiple of 25 or the product of



 an integer and a negative integer power of 2.

 A simple solution to that problem: to get 6% of the value of n, n being


 pennies:

     n = (n * 6) / 100;

Yep... provided you don't mind rounding the binary results to the nearest

But rounding money to cents is what you want. Otherwise, you'd use floating point. And yes, I know about BCD. After all, there were special opcodes for it even in the earliest 8088!
May 20 2005
next sibling parent reply John Reimer <brk_6502 yahoo.com> writes:
Walter wrote:

 And yes, I know about BCD. After all, there were special opcodes for it even
 in the earliest 8088!
 
 
 

And the MOS 6502/6510 :-)
May 20 2005
parent "TechnoZeus" <TechnoZeus PeoplePC.com> writes:
"John Reimer" <brk_6502 yahoo.com> wrote in message
news:d6lmuf$1obs$1 digitaldaemon.com...
 Walter wrote:

 And yes, I know about BCD. After all, there were special opcodes for it even
 in the earliest 8088!

And the MOS 6502/6510 :-)

Yep. :) I wrote my first assembler on a 6502 ... using a notation that I later sent to Motorola along with the plans for the 68000 family. It was an unusual assembler that used built-in macros to simulate a more capable microprocessor, with numbered data registers and flexible use of multiple addressing modes. Nice memories... what little is left of them. Hehe.

TZ
May 20 2005
prev sibling parent "TechnoZeus" <TechnoZeus PeoplePC.com> writes:
"Walter" <newshound digitalmars.com> wrote in message
news:d6lmqt$1obr$1 digitaldaemon.com...
 "TechnoZeus" <TechnoZeus PeoplePC.com> wrote in message
 news:d6c02l$l28$1 digitaldaemon.com...
 "Walter" <newshound digitalmars.com> wrote in message

 "TechnoZeus" <TechnoZeus PeoplePC.com> wrote in message
 news:d69l7d$1bo0$1 digitaldaemon.com...
 In fact, the same problem exists with any percentage between (but not

 other than x% where x is an integer multiple of 25 or the product of



 an integer and a negative integer power of 2.

 A simple solution to that problem: to get 6% of the value of n, n being


 pennies:

     n = (n * 6) / 100;

Yep... provided you don't mind rounding the binary results to the nearest

But rounding money to cents is what you want. Otherwise, you'd use floating point. And yes, I know about BCD. After all, there were special opcodes for it even in the earliest 8088!

Yes, you want rounding to the nearest cent... "eventually"... but doing so in every calculation can lead to an accumulated inaccuracy of more than a cent... which can cause errors in transactions, which in turn can add up. That's "why" BCD exists... and why it has existed for so long: because that type of cumulative inaccuracy isn't always acceptable.

TZ
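One common way to get TZ's "eventually" right is to post whole cents per transaction while carrying the exact sub-cent remainder forward, so the cumulative error never exceeds a cent and no money is lost overall. A sketch in Python's decimal module (the helper name, amounts, and rate are invented for illustration):

```python
from decimal import Decimal, ROUND_DOWN

CENT = Decimal("0.01")

def post_with_carry(amounts, rate):
    """Apply `rate` to each amount, posting whole cents and carrying the
    exact sub-cent remainder into the next transaction.  The invariant
    paid + carry == exact total means nothing is ever lost to rounding."""
    carry = Decimal("0")
    paid = Decimal("0")
    for amount in amounts:
        exact = amount * rate + carry
        whole = exact.quantize(CENT, rounding=ROUND_DOWN)
        carry = exact - whole
        paid += whole
    return paid, carry

# 1000 transactions of $0.07 at a 1.23% rate: exactly $0.861 in total.
paid, carry = post_with_carry([Decimal("0.07")] * 1000, Decimal("0.0123"))
assert paid == Decimal("0.86")            # whole cents actually posted
assert paid + carry == Decimal("0.861")   # nothing lost to rounding
```

This is the same remainder-carrying idea TZ described earlier in the thread, done with exact decimal arithmetic rather than fractions.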
May 20 2005