
digitalmars.D - fixed point support?

reply parabolis <parabolis softhome.net> writes:
I am just curious why D has no primitive types corresponding to 
fixed point values?
Jul 30 2004
parent reply Norbert Nemec <Norbert Nemec-online.de> writes:
parabolis wrote:

 I am just curious why D has no primitive types corresponding to
 fixed point values?
Guess, it's just a rather specialized topic, so it would at best stand in line with thousands of other topics that might be interesting to have. Is there any fundamental problem in implementing it in the library?
Jul 31 2004
parent reply parabolis <parabolis softhome.net> writes:
Norbert Nemec wrote:

 parabolis wrote:
 
I am just curious why D has no primitive types corresponding to
fixed point values?
Guess, it's just a rather specialized topic, so it would at best stand in line with thousands of other topics that might be interesting to have. Is there any fundamental problem in implementing it in the library?
I believe that there is. (I am still working out all the low-level details, so a correction of any of the following that is either dead wrong or possibly misleading would be nice.)

One of the main reasons for defining fixed point primitives would be that they can do non-integer math much faster than floating point. However, to override the operators one must wrap a fixed point value in a class and call the op???() functions, which can eliminate the faster operation:

1) Creating a new object would allocate memory via the garbage collector, as opposed to the (at most) single 7 (?) cycle opcode that creates space on the stack for primitive types. This is not a serious problem by itself.

2) Heap memory is used instead of stack memory. Heap memory is fragmented and thus less likely to already be in the cache. Primitives are allocated on the top of the stack, which is much more likely to be in the cache.

3) Each operator is now wrapped in a function call, which requires the overhead of setting up a /virtual/ function call, plus potential cache misses should either the virtual table or the function body itself not be in cache.

4) The biggest problem, however, is that statements other than op???Assign must create a new instance of the class, so 1) and 2) from above are multiplied by the number of expressions in an expression tree:

 e -= (a + b) * (c - d); (3 new class instances created)

5) A compiler knows that if the example in 4) is an expression of primitive types then it can immediately throw away temporary values, so newly created temporaries can reuse the same space. However, the garbage collector's decision to reuse the memory taken up by these objects will be much more complex, so 2) is confounded even more.

To actually get a performance benefit one would have to use functions directly, so:

 e -= (a + b) * (c - d);

becomes:

 fixedOpSubAssign( e, fixedOpMul( fixedOpAdd( a, b ), fixedOpSub( c, d ) ) );

So actually getting /some/ of the performance benefit kills the readability of the code.
Note even this solution is not guaranteed to be any faster than floating point. The non-virtual function bodies themselves might not be in the cache, and even when they are, each operation must set up and tear down a stack frame, which can still double or triple the best-case time.
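For concreteness, the function-call style could look something like the following sketch (C here for illustration; the fixedOp* names are the hypothetical ones from above, applied to a 16.16 format, not a real library):

```c
#include <stdint.h>

/* Hypothetical 16.16 fixed-point type and the function-style
   operations named above (an illustration, not a real library). */
typedef int32_t fixed;   /* 16 integer bits, 16 fractional bits */

static fixed fixedFromInt(int n)          { return (fixed)n << 16; }
static fixed fixedOpAdd(fixed a, fixed b) { return a + b; }
static fixed fixedOpSub(fixed a, fixed b) { return a - b; }
static fixed fixedOpMul(fixed a, fixed b) {
    /* 64-bit intermediate keeps the full product before rescaling */
    return (fixed)(((int64_t)a * b) >> 16);
}
static void fixedOpSubAssign(fixed *e, fixed v) { *e -= v; }

/* e -= (a + b) * (c - d); written out as function calls: */
void example(fixed *e, fixed a, fixed b, fixed c, fixed d)
{
    fixedOpSubAssign(e, fixedOpMul(fixedOpAdd(a, b), fixedOpSub(c, d)));
}
```

Every operation is a plain add, subtract, or shifted multiply on integers, which is exactly why the readability cost of the call syntax stings.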
Aug 01 2004
next sibling parent reply Ben Hinkle <bhinkle4 juno.com> writes:
operator overloading works for structs, too. I'd try something like

struct fix {
  private int x;                 // raw value, scaled by 2^16 (16.16 format)
  static fix opCall(int n) {     // construct from an integer
    fix res;
    res.x = n<<16;
    return res;
  }
  static fix opCall(double n) {  // construct from a double
    fix res;
    res.x = cast(int)(n*(1<<16));
    return res;
  }
  fix opAdd(fix b) {
    fix res;
    res.x = x+b.x;               // same scale, so a plain integer add
    return res;
  }
  fix opMul(fix b) {
    fix res;
    long x1 = cast(long)x * b.x; // 64-bit intermediate product
    res.x = cast(int)(x1 >> 16); // shift the extra scale factor back out
    return res;
  }
  double opCast() { 
    return 1.0*x/(1<<16); 
  }
}
int main(){
  fix x = fix(1);
  fix y = fix(2);
  fix z = x+y;
  printf("%g + %g = %g\n",
         cast(double)x,
         cast(double)y,
         cast(double)z);
  fix w = (x+y)*fix(.5);
  printf("(%g + %g)*.5 = %g\n",
         cast(double)x,
         cast(double)y,
         cast(double)w);
  return 0;
}


parabolis wrote:

 Norbert Nemec wrote:
 
 parabolis wrote:
 
I am just curious why D has no primitive types corresponding to
fixed point values?
Guess, it's just a rather specialized topic, so it would at best stand in line with thousands of other topics that might be interesting to have. Is there any fundamental problem in implementing it in the library?
I believe that there is. (I am still working out all the low-level details, so a correction of any of the following that is either dead wrong or possibly misleading would be nice.)

One of the main reasons for defining fixed point primitives would be that they can do non-integer math much faster than floating point. However, to override the operators one must wrap a fixed point value in a class and call the op???() functions, which can eliminate the faster operation:

1) Creating a new object would allocate memory via the garbage collector, as opposed to the (at most) single 7 (?) cycle opcode that creates space on the stack for primitive types. This is not a serious problem by itself.

2) Heap memory is used instead of stack memory. Heap memory is fragmented and thus less likely to already be in the cache. Primitives are allocated on the top of the stack, which is much more likely to be in the cache.

3) Each operator is now wrapped in a function call, which requires the overhead of setting up a /virtual/ function call, plus potential cache misses should either the virtual table or the function body itself not be in cache.

4) The biggest problem, however, is that statements other than op???Assign must create a new instance of the class, so 1) and 2) from above are multiplied by the number of expressions in an expression tree:

 e -= (a + b) * (c - d); (3 new class instances created)

5) A compiler knows that if the example in 4) is an expression of primitive types then it can immediately throw away temporary values, so newly created temporaries can reuse the same space. However, the garbage collector's decision to reuse the memory taken up by these objects will be much more complex, so 2) is confounded even more.

To actually get a performance benefit one would have to use functions directly, so:

 e -= (a + b) * (c - d);

becomes:

 fixedOpSubAssign( e, fixedOpMul( fixedOpAdd( a, b ), fixedOpSub( c, d ) ) );

So actually getting /some/ of the performance benefit kills the readability of the code.
Note even this solution is not guaranteed to be any faster than floating point. The non-virtual function bodies themselves might not be in the cache, and even when they are, each operation must set up and tear down a stack frame, which can still double or triple the best-case time.
Aug 01 2004
parent reply parabolis <parabolis softhome.net> writes:
Ben Hinkle wrote:

 operator overloading works for structs, too. I'd try something like
 
I actually considered that approach, but I was worried that I might run into problems similar to cases in C++ in which you have a class instantiated automatically on the stack, attempt to return it, and the object is deallocated when it goes out of scope. I was not sure when I started whether what I had in mind would be possible, so I decided to work out that issue after I verified the possibility. Also, it still does not address the issue that the function call overhead destroys the speed bonus of using fixed vs floating point, does it?
Aug 01 2004
parent reply Ben Hinkle <bhinkle4 juno.com> writes:
parabolis wrote:

 Ben Hinkle wrote:
 
 operator overloading works for structs, too. I'd try something like
 
I actually considered that approach but I was worried that I might run into problems similar to cases in C++ in which you have a class instantiated automatically on the stack and attempt to return it but the object is deallocated when it goes out of scope. I was not sure when I started whether what I had in mind would be possible so I decided to work out that issue after I verified the possibility.
Structs are passed by value and the entire struct is a single int so the whole thing should fit in a register and shouldn't be any different than an int.
 Also it still does not address the issue that the function call
 overhead destroys the speed bonus of using fixed vs floating
 point does it?
The function calls should be inlined. Since structs don't have dynamic dispatching, the exact function is known at compile time. What optimizations get performed on the inlined functions is unknown, though. My guess is that it's still faster than floating point. It wouldn't be hard to try adding/multiplying a few hundred thousand floats and fix'es and see how they compare.
Aug 01 2004
parent reply parabolis <parabolis softhome.net> writes:
Ben Hinkle wrote:

 parabolis wrote:
 
 
Ben Hinkle wrote:


operator overloading works for structs, too. I'd try something like
I actually considered that approach but I was worried that I might run into problems similar to cases in C++ in which you have a class instantiated automatically on the stack and attempt to return it but the object is deallocated when it goes out of scope. I was not sure when I started whether what I had in mind would be possible so I decided to work out that issue after I verified the possibility.
Structs are passed by value and the entire struct is a single int so the whole thing should fit in a register and shouldn't be any different than an int.
(I am only asking this because you seem to have compiler knowledge that I am interested in obtaining...) A struct with a single int would work for register passing, however in the general case you cannot assume that structs returned from functions can be passed via registers. Would a compiler identify instances where register and non-register returns are possible?
 
Also it still does not address the issue that the function call
overhead destroys the speed bonus of using fixed vs floating
point does it?
The function calls should be inlined. Since structs don't have dynamic dispatching, the exact function is known at compile time. What optimizations get performed on the inlined functions is unknown, though. My guess is that it's still faster than floating point. It wouldn't be hard to try adding/multiplying a few hundred thousand floats and fix'es and see how they compare.
That is a good point. Thanks for the reply because I have learned a bit more about compilers. However, I do not believe DMD does such optimizations. I put the example you provided in ex.d, compiled it, and ran:

     obj2asm ex.obj > ex.asm

A highlight includes:
==========================   ex.asm   ==========================
<snip>
_D5ex3fix6opCallFdZS5ex3fix    comdat
         assume    CS:_D5ex43fix6opCallFdZS5ex3fix
         enter    4,0
         push    EDI
<snip>
         leave
         ret    8
<snip>
================================================================

The enter..leave..ret sequence shows fix.opCall is a regular function call.
Aug 01 2004
next sibling parent reply Ben Hinkle <bhinkle4 juno.com> writes:
parabolis wrote:

 Ben Hinkle wrote:
 
 parabolis wrote:
 
 
Ben Hinkle wrote:


operator overloading works for structs, too. I'd try something like
I actually considered that approach but I was worried that I might run into problems similar to cases in C++ in which you have a class instantiated automatically on the stack and attempt to return it but the object is deallocated when it goes out of scope. I was not sure when I started whether what I had in mind would be possible so I decided to work out that issue after I verified the possibility.
Structs are passed by value and the entire struct is a single int so the whole thing should fit in a register and shouldn't be any different than an int.
(I am only asking this because you seem to have compiler knowledge that I am interested in obtaining...) A struct with a single int would work for register passing, however in the general case you cannot assume that structs returned from functions can be passed via registers. Would a compiler identify instances where register and non-register returns are possible?
I don't know. Somewhere I remember Walter saying if the data fits in registers it is passed in registers.
 
Also it still does not address the issue that the function call
overhead destroys the speed bonus of using fixed vs floating
point does it?
The function calls should be inlined. Since structs don't have dynamic dispatching, the exact function is known at compile time. What optimizations get performed on the inlined functions is unknown, though. My guess is that it's still faster than floating point. It wouldn't be hard to try adding/multiplying a few hundred thousand floats and fix'es and see how they compare.
That is a good point. Thanks for the reply because I have learned a bit more about compilers. However, I do not believe DMD does such optimizations. I put the example you provided in ex.d, compiled it, and ran:

     obj2asm ex.obj > ex.asm

A highlight includes:
==========================   ex.asm   ==========================
<snip>
_D5ex3fix6opCallFdZS5ex3fix    comdat
         assume    CS:_D5ex43fix6opCallFdZS5ex3fix
         enter    4,0
         push    EDI
<snip>
         leave
         ret    8
<snip>
================================================================

The enter..leave..ret sequence shows fix.opCall is a regular function call.
Hmm, you're right. Even passing the -O -inline -release flags didn't inline much. I modified the code a little to use
 fix x;
 x.init(1);
instead of
 fix x = fix(1);
and it looks like the init call is inlined. But the opFoo calls aren't.
Aug 01 2004
parent reply parabolis <parabolis softhome.net> writes:
Ben Hinkle wrote:

 parabolis wrote:
 
A highlight includes:
==========================   ex.asm   ==========================
<snip>
_D5ex3fix6opCallFdZS5ex3fix    comdat
         assume    CS:_D5ex43fix6opCallFdZS5ex3fix
         enter    4,0
         push    EDI
<snip>
         leave
         ret    8
<snip>
================================================================

The enter..leave..ret sequence shows fix.opCall is a regular
function call.
Hmm, you're right. Even passing the -O -inline -release flags didn't inline much. I modified the code a little to use
 fix x;
 x.init(1);
instead of
 fix x = fix(1);
and it looks like the init call is inlined. But the opFoo calls aren't.
So is this surprising enough to consider it a bug which may get fixed?
Aug 01 2004
next sibling parent Sean Kelly <sean f4.ca> writes:
parabolis wrote:
 
 So is this surprising enough to consider it a bug which may get fixed?
Walter's been so busy on core language issues that he's likely given little attention to optimization. It would be nice to have this implemented, but I have a feeling it will be put on the back burner for a while. Sean
Aug 01 2004
prev sibling parent reply Ben Hinkle <bhinkle4 juno.com> writes:
parabolis wrote:

 Ben Hinkle wrote:
 
 parabolis wrote:
 
A highlight includes:
==========================   ex.asm   ==========================
<snip>
_D5ex3fix6opCallFdZS5ex3fix    comdat
         assume    CS:_D5ex43fix6opCallFdZS5ex3fix
         enter    4,0
         push    EDI
<snip>
         leave
         ret    8
<snip>
================================================================

The enter..leave..ret sequence shows fix.opCall is a regular
function call.
Hmm, you're right. Even passing the -O -inline -release flags didn't inline much. I modified the code a little to use
 fix x;
 x.init(1);
instead of
 fix x = fix(1);
and it looks like the init call is inlined. But the opFoo calls aren't.
So is this surprising enough to consider it a bug which may get fixed?
actually it looks like the functions *do* get inlined if one writes
 fix x;
 x = fix(1);
 fix y;
 y = fix(2);
 ...
instead of
 fix x = fix(1);
 fix y = fix(2);
 ...

Now that I think of it this has come up before - I just forgot when I was writing the original example.
Aug 01 2004
parent reply parabolis <parabolis softhome.net> writes:
Ben Hinkle wrote:
 
 actually it looks like the functions *do* get inlined if one writes
  fix x;
  x = fix(1);
  fix y;
  y = fix(2);
  ...
 instead of
  fix x = fix(1);
  fix y = fix(2);
  ...
 
 Now that I think of it this has come up before - I just forgot when I was
 writing the original example.
Just the two static functions?
Aug 01 2004
parent reply Ben Hinkle <bhinkle4 juno.com> writes:
parabolis wrote:

 Ben Hinkle wrote:
 
 actually it looks like the functions *do* get inlined if one writes
  fix x;
  x = fix(1);
  fix y;
  y = fix(2);
  ...
 instead of
  fix x = fix(1);
  fix y = fix(2);
  ...
 
 Now that I think of it this has come up before - I just forgot when I was
 writing the original example.
Just the two static functions?
sorry - I should have been more clear. If one does not have initializers the calls are inlined. So rewriting
 fix z = x+y;
as
 fix z;
 z = x+y;
will inline the addition.
Aug 01 2004
parent reply parabolis <parabolis softhome.net> writes:
Ben Hinkle wrote:

 sorry - I should have been more clear. If one does not have initializers the
 calls are inlined. So the rewriting
  fix z = x+y;
 as
  fix z;
  z = x+y;
 will inline the addition.
I do not think that is right. Check the example I posted to Walter. The short version is that I provided an .asm with a non-inlined call with the following main:
================================================================
struct up4
{
    uint value;
    // ...
}

void main( char[][] args )
{
    up4 x, y;
    x.value = 0x8000_0000; // 1/2
    y.value = 0x4000_0000; // 1/4
    x *= y;                // 1/8
}
================================================================
Aug 01 2004
parent reply Ben Hinkle <bhinkle4 juno.com> writes:
parabolis wrote:

 Ben Hinkle wrote:
 
 sorry - I should have been more clear. If one does not have initializers
 the calls are inlined. So the rewriting
  fix z = x+y;
 as
  fix z;
  z = x+y;
 will inline the addition.
I do not think that is right. Check the example I posted to Walter. The short version is that I provided an .asm with a non-inlined call with the following main:
================================================================
struct up4
{
    uint value;
    // ...
}

void main( char[][] args )
{
    up4 x, y;
    x.value = 0x8000_0000; // 1/2
    y.value = 0x4000_0000; // 1/4
    x *= y;                // 1/8
}
================================================================
If one removes the asm block from the function it gets inlined. Looks like asm blocks prevent a function from being inlined.
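For what it's worth, the high word of a 32x32 multiply can also be obtained without asm by going through a 64-bit intermediate, which should leave the function a candidate for inlining. A sketch in C (D's ulong would presumably serve the same way):

```c
#include <stdint.h>

/* 0.32 unsigned fixed point (the "up4" from the posts): value / 2^32. */
typedef struct { uint32_t value; } up4;

/* Same result as the asm block: keep only the high 32 bits
   of the 64-bit product, which rescales automatically. */
static up4 up4_mul(up4 a, up4 b)
{
    up4 r;
    r.value = (uint32_t)(((uint64_t)a.value * b.value) >> 32);
    return r;
}
```

Whether a given compiler turns the 64-bit intermediate into a single widening multiply is its own question, but at least nothing here forces a function call.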
Aug 01 2004
parent parabolis <parabolis softhome.net> writes:
Ben Hinkle wrote:
 
 
 If one removes the asm block from the function it gets inlined. Looks like
 asm blocks prevent a function from being inlined.
Then that is a problem, since the asm block is required to do what I want to do.
Aug 02 2004
prev sibling next sibling parent reply Sean Kelly <sean f4.ca> writes:
parabolis wrote:
 
 That is a good point. Thanks for the reply because I have learned a bit 
 more about compilers. However I do not believe DMD does such 
 optimizations.
Did you try dmd -O blah? Sean
Aug 01 2004
next sibling parent Sean Kelly <sean f4.ca> writes:
Sean Kelly wrote:

 parabolis wrote:
 
 That is a good point. Thanks for the reply because I have learned a 
 bit more about compilers. However I do not believe DMD does such 
 optimizations.
Did you try dmd -O blah?
Oops, rather: dmd -O -inline blah Sean
Aug 01 2004
prev sibling parent parabolis <parabolis softhome.net> writes:
Sean Kelly wrote:

 parabolis wrote:
 
 That is a good point. Thanks for the reply because I have learned a 
 bit more about compilers. However I do not believe DMD does such 
 optimizations.
Did you try dmd -O blah? Sean
Yes. See Ben Hinkle's post above this one.
Aug 01 2004
prev sibling next sibling parent reply "Walter" <newshound digitalmars.com> writes:
"parabolis" <parabolis softhome.net> wrote in message
news:cej4cp$275c$1 digitaldaemon.com...
 That is a good point. Thanks for the reply because I have
 learned a bit more about compilers. However I do not believe DMD
 does such optimizations. I compiled the example you provided
 before in ex.d, compiled and ran:
      ran obj2asm ex.obj > ex.asm
1) compile with -inline to enable inlining
2) look at the *call* of the function to see if it was inlined, not the instance of the function
Aug 01 2004
parent parabolis <parabolis softhome.net> writes:
Walter wrote:

 "parabolis" <parabolis softhome.net> wrote in message
 news:cej4cp$275c$1 digitaldaemon.com...
 
That is a good point. Thanks for the reply because I have
learned a bit more about compilers. However I do not believe DMD
does such optimizations. I compiled the example you provided
before in ex.d, compiled and ran:
     ran obj2asm ex.obj > ex.asm
1) compile with -inline to enable inlining
2) look at the *call* of the function to see if it was inlined, not the instance of the function
Ok, I see what you mean. I did recompile with -O, -inline and -release, and the function is still not inlined. I posted the code I am actually using and the offending line from within main.
================================================================
struct up4
{
    uint value;
    <snip>
    up4 opMulAssign( up4 rhs )
    {
        uint thisVal = this.value;
        uint rhsVal = rhs.value;
        asm
        {
            // does this work without .value?
            mov EAX, thisVal;
            xor EDX, EDX;
            mul rhsVal;
            mov thisVal, EDX;
        }
        this.value = thisVal;
        return *this;
    }
    <snip>
}

void main( char[][] args )
{
    up4 x, y;
    x.value = 0x8000_0000; // 1/2
    y.value = 0x4000_0000; // 1/4
    x *= y;                // 1/8
}
================================================================
__Dmain comdat
        assume CS:__Dmain
<snip>
        call near ptr _D5types3up411opMulAssignFS5types3up4ZS5types3up4
<snip>
        ret
__Dmain ends
        end
================================================================
Aug 01 2004
prev sibling parent "Walter" <newshound digitalmars.com> writes:
"parabolis" <parabolis softhome.net> wrote in message
news:cej4cp$275c$1 digitaldaemon.com...
 Structs are passed by value and the entire struct is a single int so the
 whole thing should fit in a register and shouldn't be any different than
an
 int.
A struct with a single int would work for register passing however in the general case you cannot define returning structs from functions making an assumption that they can be passed via registers. Would a compiler identify instances where register and non-register returns are possible?
Yes, DMD does that.
Aug 01 2004
prev sibling parent reply Jonathan Leffler <jleffler earthlink.net> writes:
parabolis wrote:
 Norbert Nemec wrote:
 parabolis wrote:
 I am just curious why D has no primitive types corresponding to
 fixed point values?
Guess, it's just a rather specialized topic, so it would at best stand in line with thousands of other topics that might be interesting to have. Is there any fundamental problem in implementing it in the library?
I believe that there is. [...] One of the main reasons for defining fixed point primitives would be that they can do non-integer math much faster than floating point.
I'd like to see your justification of this assertion - especially in the context of CPUs with floating point arithmetic units. There is a lot of scaling and shifting involved, and that quickly becomes more time consuming than floating point arithmetic.

Here are three links: one to a site with a lot of information about decimal arithmetic (loosely related), one to a rather famous paper with the title "What Every Computer Scientist Should Know About Floating-Point Arithmetic" (linked from the decimal site at Hursley), and one more from the Hursley site - entitled 'A Calculated Look at Fixed-Point Arithmetic' - which specifically deals with the embedded computer space where floating-point arithmetic is not necessarily available in hardware. Somewhere, I've seen information that discounts fixed point arithmetic (implemented in software) because it does not perform as well as hardware floating point, but I regret I've not found that reference.

http://www2.hursley.ibm.com/decimal/
http://www.physics.ohio-state.edu/~dws/grouplinks/floating_point_math.pdf
http://www.embedded.com/98/9804fe2.htm

Different slant: Which languages apart from Ada (and probably PL/1) support fixed point arithmetic natively - as opposed to in libraries? Is the presence of fixed point arithmetic anything to do with their triumphant (lack of) success in the market, other than as a general symptom of feature bloat?

-- 
Jonathan Leffler   #include <disclaimer.h>
Email: jleffler earthlink.net, jleffler us.ibm.com
Guardian of DBD::Informix v2003.04 -- http://dbi.perl.org/
Aug 02 2004
next sibling parent reply parabolis <parabolis softhome.net> writes:
Jonathan Leffler wrote:
 parabolis wrote:
 One of the main reasons for defining fixed point primitives would be 
 that they can do non-integer math much faster than floating point. 
I'd like to see your justification of this assertion - especially in the context of CPUs with floating point arithmetic units. There is a lot of scaling and shifting involved, and that quickly becomes more time consuming than floating point arithmetic.
That is fair enough. Let me first introduce some terminology to make talking about this stuff simpler. Fixed-point can have many meanings depending on where exactly you fix the point. To simplify, I will just assume we are talking about a type with n bits. The point can be fixed by looking at the ratio of integer to fractional bits:

 n:0 - all integer and no fraction
 0:n - no integer and all fraction
 1:1 - same number of integer and fractional
 2:1 - two integer for every fractional
 1:2 - two fractional for every integer
 i:(n-i) - arbitrary number i of integer
 (n-f):f - arbitrary number f of fractional

Case n:0 - Integers

Integer operations are unconditionally faster than floating point operations. There may exist some strange machines with no integer operations in which this fails, but that does not affect the general case in which this holds.

Case 0:n - Percents

These are what I call percent types, which are perhaps poorly named but have range 0 <= x < 1 or -1 < x < 1 for the unsigned and signed versions respectively. These are no different from integer operations because we can make smart use of the registers (mul and div have 3 operand registers; 2 are normally used, and we use the not-normally-used one to get automatic "shifts").

Cases 1:1, 2:1 and 1:2 - Fixed-fixed point

Register shifts take constant time of 7 cycles (for x86). The 32-bit integer div takes 38 cycles while the 32-bit floating point div takes 98 cycles. At most we use 4 shifts, for a total time of 38+4*7 = 66 < 98. These figures are from:

Integer opcodes and cycles
http://www.online.ee/%7Eandre/i80386/Opcodes/index.html
Floating point opcodes and cycles
http://www.tbcnet.com/~clive/knirough.html

Cases j:(n-j) and (n-j):j - Floating-fixed point

I do not make the case that these should have primitive types. They are not significantly faster than floating point. However, I mention the case as a suggestion that any arguments against fixed-point types probably included such types.
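The shifts in the fixed-fixed case look like this in practice. A sketch in C of a signed 16.16 type (the helper names are made up for illustration):

```c
#include <stdint.h>

typedef int32_t fx16;   /* signed 16.16 fixed point: value / 2^16 */

/* Multiply: take the full 64-bit product, then shift the
   doubled scale factor back out. */
static fx16 fx16_mul(fx16 a, fx16 b)
{
    return (fx16)(((int64_t)a * b) >> 16);
}

/* Divide: pre-shift the dividend so the scale survives the integer div. */
static fx16 fx16_div(fx16 a, fx16 b)
{
    return (fx16)(((int64_t)a << 16) / b);
}
```

Each operation is one integer multiply or divide plus a constant number of shifts, which is where the cycle-count comparison above comes from.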
I believe my argument should convince you that supporting percent types and fixed-fixed point types will in fact provide an option that is faster than floating point.
 Here are two links, one to a site with a lot of information about 
 decimal arithmetic (loosely related) and one to a rather famous paper
 with the title "What Every Computer Scientist Should Know About 
 Floating-Point Arithmetic" (linked from the decimal site at Hursley). 
  The third one is a link from the Hursley site - entitled 'A Calculated 
 Look at Fixed-Point Arithmetic' - and specifically deals with the 
 embedded computer space where floating-point arithmetic is not 
I will be especially interested to read the embedded link. I do in fact write software for the tiny PIC microcontroller, which not only lacks floating point but can have as few as 2k instructions, which makes implementing multiplication and division along with anything else a rather interesting problem.
 necessarily available in hardware.  Somewhere, I've seen information 
 that discounts fixed point arithmetic (implemented in software) because 
 it does not perform as well as hardware floating point, but I regret 
 I've not found that reference.
I have posted elsewhere in this thread an argument that simply putting wrappers around a hardware implementation is sufficient to kill any speed bonus for using fixed point. So it is trivial to argue that the same wrappers around a software implementation would require the software implementation to be faster than a hardware implementation, which would require such a strange set of circumstances that it is not worth seriously considering.
 
 Which languages apart from Ada (and probably PL/1) support fixed point 
 arithmetic natively - as opposed to in libraries?  Is the presence of 
 fixed point arithmetic anything to do with their triumphant (lack of) 
 success in the market, other than as a general symptom of feature bloat?
I am hoping that D will :) Of course that is an evasion of your argument that, by analogy, other languages did not need fixed-point, so why would D. The short answer is that I believe values in [0,1) and (-1,1) are becoming increasingly frequent. Storing a probability will be best served by the 0:n values above.
Aug 02 2004
parent reply Sean Kelly <sean f4.ca> writes:
parabolis wrote:
 
 So it is trivial to argue that the same wrappers around a software 
 implementation would require a software implementation to be faster than 
 a hardware implementation which will obviously require such a strange 
 set of circumstances to happen that it should not be worth seriously 
 considering.
And last time I checked, many modern processors included (limited) fixed point support in hardware. Though from what I've read the additional instructions don't grant much of a speed gain.
 Which languages apart from Ada (and probably PL/1) support fixed point 
 arithmetic natively - as opposed to in libraries?  Is the presence of 
 fixed point arithmetic anything to do with their triumphant (lack of) 
 success in the market, other than as a general symptom of feature bloat?
I am hoping that D will :)
Same here. I've done a bit of financial programming in the past, and it's much easier to do with fixed-point than with floating-point. But how would D support a builtin primitive? Perhaps via properties? ie.
 fixed x;
 x.precision = 2;
I'm not sure I really like this example. How does Ada handle fixed-point numbers? Sean
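One way to picture the precision=2 idea for financial work is an integer count of hundredths. A hedged C sketch (the money type and helpers are made up for illustration, not anything D or Ada define):

```c
#include <stdint.h>

/* Amounts held as an integer number of hundredths (cents),
   i.e. decimal fixed point with precision 2. */
typedef int64_t money;   /* value * 100 */

static money money_from_units(long u)       { return (money)u * 100; }
static money money_add(money a, money b)    { return a + b; }
static money money_mul_int(money a, long n) { return a * n; }

/* Rounded division by an integer, e.g. splitting a bill;
   rounds half up for positive values. */
static money money_div_int(money a, long n)
{
    return (a + n / 2) / n;
}
```

The appeal over floating point is that 0.01 is represented exactly, so sums of money never accumulate binary rounding error.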
Aug 02 2004
parent reply parabolis <parabolis softhome.net> writes:
Sean Kelly wrote:

 parabolis wrote:
 
 So it is trivial to argue that the same wrappers around a software 
 implementation would require a software implementation to be faster 
 than a hardware implementation which will obviously require such a 
 strange set of circumstances to happen that it should not be worth 
 seriously considering.
And last time I checked, many modern processors included (limited) fixed point support in hardware. Though from what I've read the additional instructions don't grant much of a speed gain.
Actually any processor supporting shift and integer arithmetic will be sufficient for the fixed point support I am envisioning.
 
 Which languages apart from Ada (and probably PL/1) support fixed 
 point arithmetic natively - as opposed to in libraries?  Is the 
 presence of fixed point arithmetic anything to do with their 
 triumphant (lack of) success in the market, other than as a general 
 symptom of feature bloat?
I am hoping that D will :)
Same here. I've done a bit of financial programming in the past, and it's much easier to do with fixed-point than with floating-point. But how would D support a builtin primitive? Perhaps via properties? ie.
 fixed x;
 x.precision = 2;
I'm not sure I really like this example. How does Ada handle fixed-point numbers?
The properties you are suggesting set the position of the fixed point. That would be an example of the floating-fixed point that I was saying is probably not worth implementing. To get performance gains you have to live with built-in types like this:

 i32f0  - aka int
 i24f8  - int with 8 fractional bits
 i16f16 - int with 16 fractional bits
 i8f24  - int with 24 fractional bits
 i0f32  - int with 32 fractional bits

Then of course you have signed and unsigned variants. I am assuming D would have to pick the subset that is of the most value. I would only define types with the same number of int and fractional bits. The 32 bit versions:

 sp4  - signed percent   - 4 bytes - no int bytes
 up4  - unsigned percent - 4 bytes - no uint bytes
 sfx4 - fixed point      - 4 bytes - 2 int bytes
 ufx4 - fixed point      - 4 bytes - 2 uint bytes

Again, any better name for what I mean by percent is more than welcome!
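As a rough model of the four proposed 32-bit types (the names come from the post; the conversion helpers are hypothetical), in C:

```c
#include <stdint.h>

typedef int32_t  sfx4;   /* signed 16.16 fixed point */
typedef uint32_t ufx4;   /* unsigned 16.16 fixed point */
typedef int32_t  sp4;    /* signed "percent":   -1 < x < 1, scaled by 2^31 */
typedef uint32_t up4;    /* unsigned "percent": 0 <= x < 1, scaled by 2^32 */

/* Conversions to/from double (no range or overflow checks
   in this sketch; inputs must fit the stated ranges). */
static sfx4   sfx4_from_double(double d) { return (sfx4)(d * 65536.0); }
static up4    up4_from_double(double d)  { return (up4)(d * 4294967296.0); }
static double sfx4_to_double(sfx4 v)     { return v / 65536.0; }
static double up4_to_double(up4 v)       { return v / 4294967296.0; }
```

Note how the percent types spend every bit on the fraction, which is what makes the "automatic shift" via the high multiply register possible.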
Aug 02 2004
parent reply Arcane Jill <Arcane_member pathlink.com> writes:
In article <celu1u$8t9$1 digitaldaemon.com>, parabolis says...

Again any better name for what I mean by percent is more than 
welcome!
"fraction" is an obvious choice. Jill
Aug 02 2004
parent reply parabolis <parabolis softhome.net> writes:
Arcane Jill wrote:

 In article <celu1u$8t9$1 digitaldaemon.com>, parabolis says...
 
 
Again any better name for what I mean by percent is more than 
welcome!
"fraction" is an obvious choice.
I tried that out on myself but the most important thing about these values is the range over which they are defined. Fractions are common outside this range (e.g. 23/7). I am certain somebody somewhere has found a better name. People have been using that range since Babylonian times, whence we inherit the 360 which attempts to be divisible by anything you have in mind and can ergo use whole number "percents."
Aug 02 2004
parent reply Arcane Jill <Arcane_member pathlink.com> writes:
In article <cem10k$a5l$1 digitaldaemon.com>, parabolis says...
Arcane Jill wrote:

 "fraction" is an obvious choice.
 
I tried that out on myself but the most important thing about these values is the range over which they are defined. Fractions are common outside this range (ie 23/7).
I meant it in the sense that a real number has an integer part and a fractional part. I meant it in the sense of the C function frac(). Fractional parts are always between 0 and 1 (or, I guess, -1 and 1). Maybe then I should have said "fractional", rather than "fraction". I guess "proper fraction" is way too verbose!? Jill
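For what it's worth, standard C spells this operation modf(); a tiny sketch of the "fractional part" being discussed (the frac name is just a wrapper of mine):

```c
#include <math.h>

/* extract the fractional part of a real number; the result always
   lies strictly between -1 and 1, keeping the sign of the input */
static double frac(double x)
{
    double ipart;             /* receives the integer part */
    return modf(x, &ipart);
}
```

So frac(3.75) yields 0.75 and frac(-3.75) yields -0.75, which is exactly the "-1 and 1" caveat above.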
Aug 02 2004
parent parabolis <parabolis softhome.net> writes:
Arcane Jill wrote:

 In article <cem10k$a5l$1 digitaldaemon.com>, parabolis says...
 
Arcane Jill wrote:


"fraction" is an obvious choice.
I tried that out on myself but the most important thing about these values is the range over which they are defined. Fractions are common outside this range (ie 23/7).
I meant it in the sense that a real number has an integer part and a fractional part. In meant in the sense of the C function frac(). Fractional parts are always between 0 and 1 (or, I guess, -1 and 1). Maybe then I should have said "fractional", rather than "fraction". I guess "proper fraction" is way too verbose!?
Perhaps fracp meaning any of: fractional part, fractional portion or fractional, proper?
Aug 02 2004
prev sibling parent reply pragma <EricAnderton at yahoo dot com> <pragma_member pathlink.com> writes:
In article <ceks4g$2sc6$1 digitaldaemon.com>, Jonathan Leffler says...
Here are two links, one to a site with a lot of information about 
decimal arithmetic (loosely related) and one to a rather famous paper
with the title "What Every Computer Scientist Should Know About 
Floating-Point Arithmetic" (linked from the decimal site at Hursley). 

 [snip]
Thank you John, for these links. They're a great read. :) I haven't seen anything on this topic since I was hand-coding arithmetic in ASM to speed up graphics code on my 286. No math co-processor, so floating point calculations were pretty much out of the question. Even integer multiplication would cost you plenty of precious cycles. Simple byte/byte ratios usually did the trick for crude 3d matrix operations. Everything else relied on pre-calculated tables for sin() and cos() values... those were the days. Nostalgia aside, I think this kind of arithmetic could have some very impressive benefits for D. Promoting D as a language that supports "perfect precision" for financial and scientific programming comes to mind. For those of you that have doubts that this is useful, I urge you to take a look at the first site that Jonathan linked: http://www2.hursley.ibm.com/decimal/ "Most computers today support binary floating-point in hardware. While suitable for many purposes, binary floating-point arithmetic should not be used for financial, commercial, and user-centric applications or web services because the decimal data used in these applications cannot be represented exactly using binary floating-point." This is backed up in the FAQ on the site: http://www2.hursley.ibm.com/decimal/decifaq1.html#inexact The implementation specification they have put forward is for Java (of course), but could easily be done with D. What is important is that it supports "±0, ±Infinity, and Not-a-Number (NaN)" which would allow it to inter-operate with D's floating point and complex types rather nicely. http://www2.hursley.ibm.com/decimal/dbspec.html http://www2.hursley.ibm.com/decimal/decbits.html - Pragma
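The IBM page's claim is easy to demonstrate: 0.10 has no exact binary representation, so repeated decimal arithmetic drifts. A minimal illustration in C (the function name is mine):

```c
/* sum ten dimes in binary floating point; the representation error in
   0.10 accumulates, so the total is not exactly one dollar */
static int dimes_make_a_dollar(void)
{
    double sum = 0.0;
    for (int i = 0; i < 10; i++)
        sum += 0.10;
    return sum == 1.00;   /* false: a tiny error remains */
}
```

This is exactly the failure mode a decimal (or fixed-point) type avoids for financial code.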
Aug 02 2004
next sibling parent reply parabolis <parabolis softhome.net> writes:
pragma <EricAnderton at yahoo dot com> wrote:

 In article <ceks4g$2sc6$1 digitaldaemon.com>, Jonathan Leffler says...
 
Here are two links, one to a site with a lot of information about 
decimal arithmetic (loosely related) and one to a rather famous paper
with the title "What Every Computer Scientist Should Know About 
Floating-Point Arithmetic" (linked from the decimal site at Hursley). 
I haven't seen anything on this topic since I was hand-coding arithmetic in ASM to speed up graphics code on my 286. No math co-processor, so floating point calculations were pretty much out of the question. Even integer multipulcation would cost you plenty of precious cycles. Simple byte/byte ratios usually did the trick for crude 3d matrix operations. Everything else relied on pre-calculated tables for sin() and cos() values... those were the days.
I am intrigued by these tables. Were values really stored for the whole range of both functions or did they just store a quarter of sin's values and infer the rest from that? (I am still looking through the rest of your links to base10 math.)
Aug 02 2004
parent reply pragma <EricAnderton at yahoo dot com> <pragma_member pathlink.com> writes:
In article <celv5s$9ag$1 digitaldaemon.com>, parabolis says...
pragma <EricAnderton at yahoo dot com> wrote:
 I haven't seen anything on this topic since I was hand-coding arithmetic in ASM
 to speed up graphics code on my 286.  No math co-processor, so floating point
 calculations were pretty much out of the question.  Even integer multipulcation
 would cost you plenty of precious cycles.  Simple byte/byte ratios usually did
 the trick for crude 3d matrix operations.  Everything else relied on
 pre-calculated tables for sin() and cos() values... those were the days.
I am intrigued by these tables. Were values really stored for the whole range of both functions or did they just store a quarter of sin's values and infer the rest from that?
The sin() and cos() tables? At first I wound up preprocessing the entire range of sin() and cos() just because your lookup was a drop-in replacement for the original function (more or less). After a while, I got used to working with quarter-tables for sin() only as that quarter-arc is really all you need. Ultimately, you wind up doing a mixture of both depending on what suits the *use* of the function and what constants you've managed to optimize away as well. Another trick is to map the range of values from the function (0 to 1) and map that to the range of a byte (0 to 256). This gets you 8-bit, fixed-point precision for all your calculations... just divide the result by 256 to get a close answer in real units. I suppose a good analogy for this would be to writing a spreadsheet program in terms of cents instead of dollars. It was a valuable lesson in how to optimize away something that kills performance so dramatically (i.e. software floating-point support on an 8mhz machine). In the case of writing a "sinus scroller", you would blit pixels out to screen locations based on the phase of a sine wave. The solution for pixel locations was almost a function of some looping index, so you just solve for those ahead of time either within that program (in C) or using a precalculated inc file (generated from a C program).
(I am still looking through the rest of your links to base10 math.)
It's nifty stuff, isn't it? - Pragma
Aug 02 2004
next sibling parent reply parabolis <parabolis softhome.net> writes:
pragma <EricAnderton at yahoo dot com> wrote:

 In article <celv5s$9ag$1 digitaldaemon.com>, parabolis says...
 
pragma <EricAnderton at yahoo dot com> wrote:

I haven't seen anything on this topic since I was hand-coding arithmetic in ASM
to speed up graphics code on my 286.  No math co-processor, so floating point
calculations were pretty much out of the question.  Even integer multipulcation
would cost you plenty of precious cycles.  Simple byte/byte ratios usually did
the trick for crude 3d matrix operations.  Everything else relied on
pre-calculated tables for sin() and cos() values... those were the days.
I am intrigued by these tables. Were values really stored for the whole range of both functions or did they just store a quarter of sin's values and infer the rest from that?
The sin() and cos() tables? At first I wound up preprocessing the entire range of sin() and cos() just because your lookup was a drop-in replacement for the original function (more or less). After a while, I got used to working with quarter-tables for sin() only as that quarter-arc is really all you need. Ultimately, you wind up doing a mixture of both depending on what suits the *use* of the function and what constants you've managed to optimize away as well.
Ah yes I have seen preprocessing used to deal with other similar cases. I have found myself using much abbreviated sin/cos/etc functions that have maybe 4-8 cases in a switch table. Now it looks like I might have to write an integer implementation which I have been guessing will just define sin using a quarter table and define cos in terms of sin. Then the rest of the trig functions in terms of these two. It heartens me to hear that my inclination is not without precedent :)
 
 Another trick is to map the range of values from the function (0 to 1) and map
 that to the range of a byte (0 to 256).  This gets you 8-bit, fixed-point
 precision for all your calculations... just divide the result by 256 to get a
 close answer in real units.  I suppose a good analogy for this would be to
 writing a spreadsheet program in terms of cents instead of dollars.
That is a nifty trick.
 
 It was a valuable lesson in how to optimize away something that kills
 performance so dramatically (i.e. software floating-point support on an 8mhz
 machine).  In the case of writing a "sinus scroller", you would blit pixels out
 to screen locations based on the phase of a sine wave.  The solution for pixel
 locations was almost a function of some looping index, so you just solve for
 those ahead of time either within that program (in C) or using a precalculated
 inc file (generated from a C program).
 
 
(I am still looking through the rest of your links to base10 math.)
It's nifty stuff, isn't it?
They do have a strong argument that converting perfectly from decimal to binary and back is not possible. I am actually more fond of binary than decimal myself. I mean when you think of memorizing the multiplication tables how could it be simpler? My preferred solution would be to change the world so everybody learns to use the octal system. However given the speed with which the metric system caught on in the US I doubt the feasibility of such a solution. So computers should probably be able to deal with decimal. I think I am convinced that a decimal type would be useful. Supporting an IEEE type would have the benefit that hardware support ought to be expected. I am not sure it would be worthwhile to support until there is hardware support... (which there may be... I know there is a BCD type that the x86 knows about)
Aug 02 2004
prev sibling parent reply Stephen Waits <steve waits.net> writes:
pragma <EricAnderton at yahoo dot com> wrote:
 In article <celv5s$9ag$1 digitaldaemon.com>, parabolis says...
 
 The sin() and cos() tables?
Lookup tables should actually be slower on just about any modern system. And, as mentioned above, fixed point isn't useful in terms of performance any more. It's only something you'd use if you need exact precision. Someone's already written a bigint library which might fit your needs here. --Steve
Aug 03 2004
next sibling parent reply Arcane Jill <Arcane_member pathlink.com> writes:
In article <ceoi6l$1dmf$1 digitaldaemon.com>, Stephen Waits says...
Someone's already written a bigint library which might fit 
your needs here.
Er. What? That was I. (class Int in dsource/Deimos). However, I don't see how that would help in this case. I mean, you mentioned:
 The sin() and cos() tables?
Big integers => big tables. Silly, I think. The originally mentioned lookup tables had byte-sized entries, which are not very big at all. (OFF TOPIC: I'm too busy for it right now, but one day, purely for academic interest, I'd like to implement the class Real (unlimited precision real numbers). Getting sin() and cos() working for those would presumably involve infinite series - yikes).
Lookup tables should actually be slower on just about any modern system.
I've heard people say this, and I find it staggeringly hard to believe. Of course, I know nothing about this area, but here are two bits of code: versus Now tell me, how can the second one /possibly/ be faster than the first? (Genuine question. I'm not suggesting you're wrong, I just don't understand how).
Aug 03 2004
next sibling parent reply parabolis <parabolis softhome.net> writes:
Arcane Jill wrote:

 In article <ceoi6l$1dmf$1 digitaldaemon.com>, Stephen Waits says...
 
Someone's already written a bigint library which might fit 
your needs here.
Er. What? That was I. (class Int in dsource/Demios). However, I don't see how that would help in this case. I mean, you mentioned:
The sin() and cos() tables?
Big integers => big tables. Silly, I think. The originally mentioned lookup tables had byte-sized entries, which are not very big at all.
They might make handy shortcuts for 1, 0 and -1?
 
 (OFF TOPIC: I'm too busy for it right now, but one day, purely for academic
 interest, I'd like to implement the class Real (unlimited precision real
 numbers). Getting sin() and cos() working for those would presumably involve
 infinite series - yikes).
I remember attending a talk by a guy from a university in Florida who explained the nifty means by which calculators implement these operations. I believe it might be worthwhile to try to find him. I am sure more information about which university and which guy would help, but alas I cannot remember either...
 
 
Lookup tables should actually be slower on just about any modern system.
I've heard people say this, and I find it staggeringly hard to believe. Of course, I know nothing about this area, but here are two bits of code: versus Now tell me, how can the second one /possibly/ be faster than the first? (Genuine question. I'm not suggesting you're wrong, I just don't understand how).
Should the trig functions be used infrequently, then fetching the table from a memory page that has been swapped out will force a context switch whose overhead can probably be counted against the efficiency of looking something up in a table. (The switch would not have occurred if you had not used a table.) I would not count the time it takes to retrieve the page from disk, however, because technically your process/thread is not running at the time.
Aug 03 2004
parent reply pragma <EricAnderton at yahoo dot com> <pragma_member pathlink.com> writes:
In article <ceokl2$1f02$1 digitaldaemon.com>, parabolis says...
Arcane Jill wrote:
 
 (OFF TOPIC: I'm too busy for it right now, but one day, purely for academic
 interest, I'd like to implement the class Real (unlimited precision real
 numbers). Getting sin() and cos() working for those would presumably involve
 infinite series - yikes).
I remember attending a talk by a guy from a University in Florida who explained the nifty means by which calculators implement these operations. I believe it might be worthwhile to look try to find him. I am sure more information which University and which guy would help but alas I cannot remember either...
This really interested me, so I thought I'd post my findings on the topic. I couldn't find any particular professor on the topic, but the algorithm is pretty well documented online. Apparently, most calculators use CORDIC (COordinate Rotation DIgital Computer) as an algorithm for computing transcendental functions. A CORDIC FAQ I found online really drove home how useful this could be: "CORDIC revolves around the idea of "rotating" the phase of a complex number, by multiplying it by a succession of constant values. However, the "multiplies" can all be powers of 2, so in binary arithmetic they can be done using just shifts and adds; no actual "multiplier" is needed." (From http://www.dspguru.com/info/faqs/cordic.htm ) There's even a C lib linked on the page: http://www.dspguru.com/sw/lib/cordic_1-0.zip - Pragma
Aug 03 2004
parent parabolis <parabolis softhome.net> writes:
pragma <EricAnderton at yahoo dot com> wrote:

 In article <ceokl2$1f02$1 digitaldaemon.com>, parabolis says...
 
Arcane Jill wrote:

(OFF TOPIC: I'm too busy for it right now, but one day, purely for academic
interest, I'd like to implement the class Real (unlimited precision real
numbers). Getting sin() and cos() working for those would presumably involve
infinite series - yikes).
I remember attending a talk by a guy from a University in Florida who explained the nifty means by which calculators implement these operations. I believe it might be worthwhile to look try to find him. I am sure more information which University and which guy would help but alas I cannot remember either...
This really interested me, so I thought I'd post my findings on the topic. I couldn't find any particular professor on the topic, but the algorithm is pretty well documented online. Apparently, most calculators use CORDIC (COordinate Rotation DIgital Computer) as an algorithm for computing transcendental functions. A CORDIC FAQ I found online really drove home how useful this could be:
Now that rings a bell. That was exactly what I was talking about. :)
 
 "CORDIC revolves around the idea of "rotating" the phase of a complex number,
by
 multiplying it by a succession of constant values. However, the "multiplies"
can
 all be powers of 2, so in binary arithmetic they can be done using just shifts
 and adds; no actual "multiplier" is needed."
 
 (From http://www.dspguru.com/info/faqs/cordic.htm )
 
 There's even a C lib linked on the page:
 
 http://www.dspguru.com/sw/lib/cordic_1-0.zip
 
Thanks for the link.
Aug 03 2004
prev sibling parent reply pragma <EricAnderton at yahoo dot com> <pragma_member pathlink.com> writes:
In article <ceojan$1e9o$1 digitaldaemon.com>, Arcane Jill says...
(OFF TOPIC: I'm too busy for it right now, but one day, purely for academic
interest, I'd like to implement the class Real (unlimited precision real
numbers). Getting sin() and cos() working for those would presumably involve
infinite series - yikes).
I'm going to ignore that huge "OT" you put in there. :) That BigInt class is a work of art. If it's going to have a sister-class (BigReal?) one of these days, that would be fantastic. I'd suspect that an infinite-series calculation would be done like your test for primality: with a parameter specifying a number of iterations? Come to think of it, the same could be done for Pi. ;) - Pragma
Aug 03 2004
next sibling parent reply Daniel Horn <hellcatv hotmail.com> writes:
I wrote a sister class: Rational
to have real would require infinite series and so forth...and be impractical
but rational is a template class on a type, presumably BigInt or some 
other type (like long or cent)

you can nab it
http://svn.dsource.org/svn/projects/deimos
 http://svn.dsource.org/svn/projects/deimos/trunk/etc/bigrational.d
enjoy pragma <EricAnderton at yahoo dot com> wrote:
 In article <ceojan$1e9o$1 digitaldaemon.com>, Arcane Jill says...
 
(OFF TOPIC: I'm too busy for it right now, but one day, purely for academic
interest, I'd like to implement the class Real (unlimited precision real
numbers). Getting sin() and cos() working for those would presumably involve
infinite series - yikes).
I'm going to ignore that huge "OT" you put in there. :) That BigInt class is a work of art. If it's going to have a sister-class (BigReal?) one of these days, that would be fantastic. I'd suspect that an infinite-series calculation would be done like your test for primality: with a parameter specifying a number of iterations? Come to think of it, the same could be done for Pi. ;) - Pragma
Aug 03 2004
next sibling parent reply Arcane Jill <Arcane_member pathlink.com> writes:
In article <ceon77$1g39$1 digitaldaemon.com>, Daniel Horn says...
I wrote a sister class: Rational
which is very well done.
to have real would require infinite series and so forth...and be impractical
but rational is a template class on a type, presumably BigInt or some 
other type (like long or cent)
or even Int. :)
 http://svn.dsource.org/svn/projects/deimos/trunk/etc/bigrational.d
In the next re-organization of Deimos, in a couple of weeks' time, I'll be creating a new package called etc.math, basically in order to tidy up the root. I'll be moving etc.prime into it (so "import etc.prime" will have to change to either "import etc.math.prime" or "import etc.math.math"). When that's done, would you be willing to move your module from etc.bigrational to etc.math.bigrational (or even etc.math.rational)? It's your class, so it's totally up to you, of course. Jill
Aug 03 2004
parent Daniel Horn <hellcatv hotmail.com> writes:
Arcane Jill wrote:
 In article <ceon77$1g39$1 digitaldaemon.com>, Daniel Horn says...
 
I wrote a sister class: Rational
which is very well done.
to have real would require infinite series and so forth...and be impractical
but rational is a template class on a type, presumably BigInt or some 
other type (like long or cent)
or even Int. :)
http://svn.dsource.org/svn/projects/deimos/trunk/etc/bigrational.d
In the next re-organization of Deimos, in a couple of weeks' time, I'll be creating a new package called etc.math, basically in order to tidy up the root. I'll be moving etc.prime into it (so "import etc.prime" will have to change to either "import etc.math.prime" or "import etc.math.math". When that's done, would you be willing to move your module from etc.bigrational to etc.math.bigrational (or even etc.math.rational)? It's your class, so it's totally up to you, of course.
sure... I'd be happy to follow suit :-) If it's convenient you can go ahead and move it along with the rest... but it may not be so simple due to the constructor template types that must be moved
 
 Jill
 
 
 
Aug 03 2004
prev sibling parent reply Ben Hinkle <bhinkle4 juno.com> writes:
At risk of plugging GMP yet again, I recommend checking out GMP's rational
(mpq) and arbitrary-precision floating point (mpf and mpfr) C interface and
my light D wrappers for it. The mpfr code has sin and cos routines as well
as a bunch of other useful functions.
Info about GMP can be found at http://www.swox.com/gmp. Info about my mpq
wrapper is at
 http://home.comcast.net/~benhinkle/gmp-d/doc/classmpq.html

Daniel Horn wrote:

 I wrote a sister class: Rational
 to have real would require infinite series and so forth...and be
 impractical but rational is a template class on a type, presumably BigInt
 or some other type (like long or cent)
 
 you can nab it
 http://svn.dsource.org/svn/projects/deimos
 http://svn.dsource.org/svn/projects/deimos/trunk/etc/bigrational.d
enjoy pragma <EricAnderton at yahoo dot com> wrote:
 In article <ceojan$1e9o$1 digitaldaemon.com>, Arcane Jill says...
 
(OFF TOPIC: I'm too busy for it right now, but one day, purely for
academic interest, I'd like to implement the class Real (unlimited
precision real numbers). Getting sin() and cos() working for those would
presumably involve infinite series - yikes).
I'm going to ignore that huge "OT" you put in there. :) That BigInt class is a work of art. If it's going to have a sister-class (BigReal?) one of these days, that would be fantastic. I'd suspect that an infinite-series calculation would be done like your test for primality: with a parameter specifying a number of iterations? Come to think of it, the same could be done for Pi. ;) - Pragma
Aug 03 2004
next sibling parent parabolis <parabolis softhome.net> writes:
Ben Hinkle wrote:

 At risk of plugging GMP yet again, I recommend checking out GMP's rational
 (mpq) and arbitrary-precision floating point (mpf and mpfr) C interface and
 my light D wrappers for it. The mpfr code has sin and cos routines as well
 as a bunch of other useful functions.
 Info about GMP can be found at http://www.swox.com/gmp. Info about my mpq
 wrapper is at
  http://home.comcast.net/~benhinkle/gmp-d/doc/classmpq.html
 
I was not aware that you plugged it before. That is a great library to have around. :) Thanks.
Aug 03 2004
prev sibling parent Arcane Jill <Arcane_member pathlink.com> writes:
In article <cep8he$1okt$1 digitaldaemon.com>, Ben Hinkle says...
At risk of plugging GMP yet again,
No-one disputes that GMP is brilliant, Ben, but it's not D, it's C. Someone once (unkindly) described D as "just another way to call C functions". If that's all D is going to be, then there's no point to it. A lot of us want things /native/. Besides which - one reason why I chose to write Int rather than use GMP for my crypto project, is that GMP ints can't be made to wipe their content after use - very bad for a security application. If we write our own, it will do what we want it to do. Arcane Jill
Aug 03 2004
prev sibling parent Arcane Jill <Arcane_member pathlink.com> writes:
In article <ceol8t$1f6s$1 digitaldaemon.com>, pragma <EricAnderton at yahoo dot
com> says...

That BigInt class is a work of art.
Wow. Thank you. I do appreciate the comment. In the interests of not confusing people, however, I should mention that the name of the class is actually Int (not BigInt). I believe that "big" is superfluous - I'm a mathematician at heart, and, as a mathematician, I know that a true mathematical integer /has/ unlimited precision. I figured the capital "I" would suffice.
If it's going to have a sister-class
(BigReal?) one of these days, that would be fantastic.  
If it were to have such a sister class, that class would simply be called Real. Alas, however, I was merely daydreaming. While I am fascinated by the possibility, I've got too much else to do. I imagine though that it shouldn't be too hard (for someone else) to do at least the basic arithmetic operations. It could in principle be implemented as an { int exponent, Int mantissa } pair, I guess.
I'd suspect that an infinite-series calculation would be done like your test for
primality: with a parameter specifying a number of iterations?  
More probably a parameter (or class member variable) specifying the required number of significant bits. I think a Chebyshev approximation would require one fewer iteration than a Taylor series, but it's been a while since I last looked into that sort of thing.
Come to think of it, the same could be done for Pi. ;)
Yes, if you had a class Real, you could indeed request pi to a million places, and expect the class to represent it. Now ... here's another completely mad thought. If, instead of implementing sin(Real), you were to implement the template function sin(T), where T was any type for which pow(T,n) (n an integer) and T*k (k a scalar) is defined, then you'd automatically end up with sin(Real) as just one template specialization among many. Other specializations you'd get for free would include sin(complex), sin(Complex), sin(SquareMatrix) - oh yes, the possibilities are both endless and completely pointless. But fun. :)
Aug 03 2004
prev sibling next sibling parent parabolis <parabolis softhome.net> writes:
Stephen Waits wrote:
 pragma <EricAnderton at yahoo dot com> wrote:
 
 In article <celv5s$9ag$1 digitaldaemon.com>, parabolis says...

 The sin() and cos() tables?
Lookup tables should actually be slower on just about any modern system.
That is probably a valid point in certain circumstances.
 
 And, as mentioned above, fixed point isn't useful in terms of 
 performance any more.  It's only something you'd use if you need exact 
 precision.  Someone's already written a bigint library which might fit 
 your needs here.
I am curious how you would use floating point to represent variables that range from 0..1 or -1..1 and do it faster than integer operations. I am under the impression you will have to ensure the exponent stays within defined bounds for every operation?
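The representation being hinted at here, a 32-bit integer n standing for the fraction n / 2^32, can be sketched quickly (in Python for brevity; the names are mine, not from the thread):

```python
# A 32-bit unsigned fraction: the integer n represents n / 2**32,
# covering the range 0..1. Addition is one integer add with wraparound,
# with no exponent bookkeeping as floating point requires.
MASK = 0xFFFF_FFFF

def frac_add(a, b):
    return (a + b) & MASK  # wraps like a 32-bit register

def to_float(n):
    return n / 2**32

half    = 0x8000_0000  # represents 0.5
quarter = 0x4000_0000  # represents 0.25
assert to_float(frac_add(half, quarter)) == 0.75
```

The point of the exercise: because the range is fixed at 0..1, the scale factor is implicit and never has to be renormalized per operation.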
Aug 03 2004
prev sibling next sibling parent reply pragma <EricAnderton at yahoo dot com> <pragma_member pathlink.com> writes:
In article <ceoi6l$1dmf$1 digitaldaemon.com>, Stephen Waits says...
pragma <EricAnderton at yahoo dot com> wrote:
 In article <celv5s$9ag$1 digitaldaemon.com>, parabolis says...
 
 The sin() and cos() tables?
Lookup tables should actually be slower on just about any modern system.
I'll have to both agree and disagree with your statement here. I agree that sin/cos lookups are hardly needed anymore on modern hardware since you're no longer worried about an fmul taking 27 clock cycles to complete. Also, the potential cache-miss for accessing the table might well override any advantage in side-stepping such math. But I don't agree that a lookup scheme for calculations is something wholly without merit, as there are plenty of examples of applications out there today that do this sort of pre-calculation. The difference is that the scale of pre-optimization has risen to take advantage of larger memory spaces than were available before. Plus there's still plenty of embedded systems out there that use slower chips and run with far less ram than our PC's.
And, as mentioned above, fixed point isn't useful in terms of 
performance any more.  It's only something you'd use if you need exact 
precision.  
Agreed. My tangent there on lookup tables was really more a clarification of my nostalgic fit: there's a certain art (and sport!) to navigating restrictive programming environments. And I really do feel that a lightweight 'fixed' class or type might deliver that kind of exact precision in a way that meshes well with D's goals and objectives.
Someone's already written a bigint library which might fit 
your needs here.
Yep. Arcane Jill's class comes pretty close to the mark. All it really needs is a fractional portion or exponent and a way to perform math without loss of data. - Pragma
Aug 03 2004
next sibling parent reply parabolis <parabolis softhome.net> writes:
pragma <EricAnderton at yahoo dot com> wrote:

 In article <ceoi6l$1dmf$1 digitaldaemon.com>, Stephen Waits says...
 
pragma <EricAnderton at yahoo dot com> wrote:

In article <celv5s$9ag$1 digitaldaemon.com>, parabolis says...

The sin() and cos() tables?
Lookup tables should actually be slower on just about any modern system.
I'll have to both agree and disagree with your statement here. I agree that sin/cos lookups are hardly needed anymore on modern hardware since you're no longer worried about an fmul taking 27 clock cycles to complete. Also, the potential cache-miss for accessing the table might well override any advantage in side-stepping such math. But I don't agree that a lookup scheme for calculations is something wholly without merit, as there are plenty of examples of applications out there today that do this sort of pre-calculation. The difference is that the scale of pre-optimization has risen to take advantage of larger memory spaces than were available before. Plus there's still plenty of embedded systems out there that use slower chips and run with far less ram than our PC's.
I not only agree but also believe that these smaller chips will become relatively more ubiquitous than they have been in the past. I am still debating how useful D will be in that arena. Any ideas?
 
And, as mentioned above, fixed point isn't useful in terms of 
performance any more.  It's only something you'd use if you need exact 
precision.  
Agreed. My tangent there on lookup tables was really more a clarification of my nostalgic fit: there's a certain art (and sport!) to navigating restrictive programming environments. And I really do feel that a lightweight 'fixed' class or type might deliver that kind of exact precision in a way that meshes well with D's goals and objectives.
Ok I have a question for you. How would you implement the 32-bit data type that ranges from 0..1 or -1..1 (a type I am calling either percent/upercent or frap/ufrap unless a better suggestion is made)?
Aug 03 2004
parent reply pragma <EricAnderton at yahoo dot com> <pragma_member pathlink.com> writes:
In article <ceomcm$1fpk$1 digitaldaemon.com>, parabolis says...
pragma <EricAnderton at yahoo dot com> wrote:
 Plus there's still plenty of
 embedded systems out there that use slower chips and run with far less ram than
 our PC's.
I not only agree but also believe that these smaller chips will become relatively more ubiquitous than they have been in the past. I am still debating how useful D will be in that arena. Any ideas?
Let me first preface this by saying that I am not an embedded systems engineer, but I have done some reading on the topic. I think D has a few obstacles to overcome for embedded programming, as was mentioned here some time ago. First, we need a "phobos lite" runtime that doesn't contain anything OS-specific: just type support and the gc. Second, any dev kits or I/O libs will need to be rewritten in D or made linkable via C. Thankfully, I don't think targeting any given processor should prove impossible, as GCC has plenty of portability to use. Just install the DMD frontend and off you go. Those aside, D has everything else that will make it succeed where other technologies (*coughjavacough*) were supposed to reach ubiquity. It has everything C++ has without all the cruft, plus an _optional_ gc. And it compiles down to really small binaries. In essence, it will scale to fit just about anything. The only other thing that would really get D embedded development going would be a really solid dev-product offering bundled with a D compiler. Google for "OOpic" or "Basic Stamp" and you'll see what I mean.
Ok I have a question for you. How would you implement the 32-bit 
data type that ranges from 0..1 or -1..1 (a type I am calling 
either percent/upercent or frap/ufrap unless a better suggestion 
is made)?
Probably the most general purpose type would be a simple ratio of two numbers called Fixed or Ratio. class Ratio{ short top, bottom; } This would cover all the cases that a floating-point type cannot including coping with irrational numbers like 1/3. It's also relatively small, and aligns on a 4-byte boundary which is important for memory conservation with small types. The implementation would also be dead-simple as any third-grader could tell you what kind of arithmetic to use. This is a good thing as anybody's first stab at this is likely to be pretty close to optimal. - Pragma
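A minimal sketch of the Ratio idea (in Python for brevity; the `top`/`bottom` names are from the post, while the gcd reduction and operators are my additions):

```python
from math import gcd

class Ratio:
    # pragma's { short top, bottom; } pair, sketched in Python;
    # reducing by the gcd keeps the small fields from overflowing quickly
    def __init__(self, top, bottom):
        g = gcd(top, bottom) or 1
        self.top, self.bottom = top // g, bottom // g

    def __mul__(self, other):
        return Ratio(self.top * other.top, self.bottom * other.bottom)

    def __add__(self, other):
        return Ratio(self.top * other.bottom + other.top * self.bottom,
                     self.bottom * other.bottom)

two_thirds = Ratio(1, 3) + Ratio(1, 3)
assert (two_thirds.top, two_thirds.bottom) == (2, 3)
```

Note the design trade-off: a ratio represents 1/3 exactly, which no binary fixed- or floating-point type can.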
Aug 03 2004
next sibling parent reply Arcane Jill <Arcane_member pathlink.com> writes:
In article <ceoqgo$1i0b$1 digitaldaemon.com>, pragma <EricAnderton at yahoo dot
com> says...

... irrational numbers like 1/3.
I don't think you meant to say that! :) Jill
Aug 03 2004
parent pragma <EricAnderton at yahoo dot com> <pragma_member pathlink.com> writes:
In article <ceore3$1ikr$1 digitaldaemon.com>, Arcane Jill says...
In article <ceoqgo$1i0b$1 digitaldaemon.com>, pragma <EricAnderton at yahoo dot
com> says...

... irrational numbers like 1/3.
I don't think you meant to say that! :)
D'oh. :( You would be right. - Pragma
Aug 03 2004
prev sibling parent reply parabolis <parabolis softhome.net> writes:
pragma <EricAnderton at yahoo dot com> wrote:

 In article <ceomcm$1fpk$1 digitaldaemon.com>, parabolis says...
 
pragma <EricAnderton at yahoo dot com> wrote:

Plus there's still plenty of
embedded systems out there that use slower chips and run with far less ram than
our PC's.
I not only agree but also believe that these smaller chips will become relatively more ubiquitous than they have been in the past. I am still debating how useful D will be in that arena. Any ideas?
Let me first preface this by saying that I am not an embedded systems engineer, but I have done some reading on the topic. I think D has a few obstacles to overcome for embedded programming, as was mentioned here some time ago. First, we need a "phobos lite" runtime that doesn't contain anything os specific: just type support and the gc. Second, any dev kits or I/O libs will need to be rewritten in D or made linkable via C. Thankfully, I don't think targeting any given processor should prove impossible as GCC has plenty of portability to use. Just install the DMD frontend and off you go.
However I believe DMD is designed to support MS COM, which looks like a less than optimal design coming from a minimal-memory standpoint. I admit I laughed at the OOpic mention you made later, as some PICs have less than a hundred bytes of memory. Not exactly the kind of environment where you will want objects to specialise each other... However this is an extreme case and I believe OO could be useful for microcontrollers/small processors. Actually, the more I think about it, the bit[] construct in D will have unlimited value for smaller applications. Especially for directly mapping a bit[] onto a register that deals with pins.
 
 Those aside, D has everything else that will make it succeed where other
 technologies (*coughjavacough*) were supposed to reach ubiquity.  It has
Lol I remember telling someone who said Java was designed for applications programming that I believed I had read it was actually designed for embedded programming. I think you may be able to confirm that?
 everything C++ has without all the cruft, plus an _optional_ gc.  And it
 compiles down to really small binaries.  In essence, it will scale to fit just
 about anything.
Yes I have been meaning to make a list of the things D does /really/ well and I have to say I believe the memory management is sheer genius.
 
 The only other thing that would really get D embedded development going would
be
 a really solid dev-product offering that is bundled with a D compiler.  Google
 for "OOpic" or "Basic Stamp" and you'll see what I mean.
Now that is an interesting idea.
 
Ok I have a question for you. How would you implement the 32-bit 
data type that ranges from 0..1 or -1..1 (a type I am calling 
either percent/upercent or frap/ufrap unless a better suggestion 
is made)?
Probably the most general purpose type would be a simple ratio of two numbers called Fixed or Ratio. class Ratio{ short top, bottom; } This would cover all the cases that a floating-point type cannot including coping with irrational numbers like 1/3. It's also relatively small, and aligns on a 4-byte boundary which is important for memory conservation with small types. The implementation would also be dead-simple as any third-grader could tell you what kind of arithmetic to use. This is a good thing as anybody's first stab at this is likely to be pretty close to optimal.
The problem with that implementation is that you have numbers outside the range 0..1 or -1..1. Here is an extract of the unsigned percent implementation I have been working on, which I believe is faster than floating point. Take a look and let me know if I have missed anything that would make it slower than floating point?
================================================================
const up4 UP4_ONE =  { value: 0xFFFF_FFFF };
const up4 UP4_ZERO = { value: 0x0000_0000 };

struct up4 {
    uint value;

    up4 opAdd( up4 rhs ) {
        up4 res;
        res.value = this.value + rhs.value;
        return res;
    }

    up4 opSub( up4 rhs ) {
        up4 res;
        res.value = this.value - rhs.value;
        return res;
    }

    up4 opMul( up4 rhs ) {
        up4 res;
        uint thisVal = this.value;
        uint rhsVal = rhs.value;
        uint resVal;
        asm {
            mov EAX, thisVal;
            xor EDX, EDX;
            mul rhsVal;
            mov resVal, EDX;
        }
        res.value = resVal;
        return res;
    }

    up4 opDiv( up4 rhs ) {
        up4 res;
        uint thisVal = this.value;
        uint rhsVal = rhs.value;
        uint resVal;
        if( rhs.value == 0 ) {
            throw new Exception( "Division by zero" );
        }
        // avoid division overflow
        else if( this.value >= rhs.value ) {
            return UP4_ONE;
        }
        asm { // does this work without .value?
            xor EAX, EAX;
            mov EDX, thisVal;
            div rhsVal;
            mov resVal, EAX;
        }
        res.value = resVal;
        return res;
    }
}
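For reference, the asm above keeps EDX after mul (the high 32 bits of the 64-bit product), and divides EDX:EAX by the divisor. The same semantics in plain arithmetic (a Python sketch of the behavior, not the D code itself):

```python
MASK = 0xFFFF_FFFF

def up4_mul(a, b):
    # high word of the 64-bit product, as the mul/EDX sequence does
    return ((a * b) >> 32) & MASK

def up4_div(a, b):
    # (a << 32) / b, as the EDX:EAX div does; the struct guards a < b
    return ((a << 32) // b) & MASK

half = 0x8000_0000
assert up4_mul(half, half) == 0x4000_0000      # 0.5 * 0.5 == 0.25
assert up4_div(0x4000_0000, half) == half      # 0.25 / 0.5 == 0.5
```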
Aug 03 2004
parent reply pragma <EricAnderton at yahoo dot com> <pragma_member pathlink.com> writes:
In article <ceou30$1k6u$1 digitaldaemon.com>, parabolis says...
However I believe DMD is designed to support MS COM which looks 
like a less than optimal design coming from a minimal memory 
standpoint. I admit I laughed at the OOpic mention you made 
later as some pics have less than a hundred bytes of memory. Not 
exactly the kind of environment where you will want objects to 
specialise each other... However this is an extreme case and I 
believe OO could be useful for microcontrollers/small processors.
True, the OOPic is kinda dinky, but it keeps showing up in innumerable hobby projects as it doesn't require extra (and expensive) hardware that other technologies require. If you have a serial port, you can program an OOpic. It's really the ease of adoption factor that I love about the thing. I agree with you completely about OO and hardware. Generally speaking, it can be at cross-purposes with a constricted environment, but I think that embedded software is long overdue for such an improvement.
Actually the more I think about it the bit[] construct in D will 
have unlimited value for smaller applicatoins. Especially for 
directly mapping a bit[] onto a register that deals with pins.
Exactly. Plus your logic gets much more readable with bit-array slicing and such.. much more so than using masks all the time. Not that you *can't* do that in D, only now you don't have to all the time.
Lol I remember telling someone who said Java was designed for 
applications programming that I believed I had read it was 
actually designed for embedded programming. I think you may be 
able to confirm that?
Yes and yes. This tidbit should be able to clear things up: "The Java programming Language evolved from a language named Oak. Oak was developed in the early nineties at Sun Microsystems as a platform-independent language aimed at allowing entertainment appliances such as video game consoles and VCRs to communicate. [...] their focus shifted to the Internet and WebRunner, an Oak-enabled browser, was born. Oak’s name was changed to Java and WebRunner became the HotJava web browser." (From http://www.engin.umd.umich.edu/CIS/course.des/cis400/java/java.html) So basically if it wasn't for the .com-era-thingy, which invited Sun to drag their technology through the quagmire of immature technologies that was the web, Sun could/would have had Oak running on every Tivo, cable box and Nintendo in the world. Instead, by 1997 "Java" became synonymous with "slow webpage" and continues to grow ever more divergent from the (royalty rich) embedded market. D'oh.
Yes I have been meaning to make a list of the things D does 
/really/ well and I have to say I believe the memory management 
is sheer genius.
Yep, although this is truly more of an alchemy of Walter's vision for a better language using Bjarne Stroustrup's gc for C. Personally, I find the paradigm tweaking nature of D's syntax, all by itself, to be a far more profound advantage (however subtle it can be).
The problem with that implementation is that you have numbers 
outside the range 0..1 or -1..1. Here is an extract of the 
unsigned percent implementation I have been working on which I 
believe is faster than floating point. Take a look and let me 
know if I have missed anything that would make it slower than 
floating point?
Okay, I see where you're going with this. Let me take a look.
================================================================
const up4 UP4_ONE =  { value: 0xFFFF_FFFF };
const up4 UP4_ZERO = { value: 0x0000_0000 };
struct up4 {
     uint value;

     up4 opAdd( up4 rhs ) {
         up4 res;
         res.value = this.value + rhs.value;
         return res;
     }

     up4 opSub( up4 rhs ) {
         up4 res;
         res.value = this.value - rhs.value;
         return res;
     }

     up4 opMul( up4 rhs ) {
         up4 res;
         uint thisVal = this.value;
         uint rhsVal = rhs.value;
         uint resVal;
         asm {
             mov EAX, thisVal;
             xor EDX, EDX;
             mul rhsVal;
             mov resVal, EDX;
         }
         res.value = resVal;
         return res;
     }

     up4 opDiv( up4 rhs ) {
         up4 res;
         uint thisVal = this.value;
         uint rhsVal = rhs.value;
         uint resVal;
         if( rhs.value == 0 ) {
             throw new Exception( "Division by zero" );
         }
         // avoid division overflow
         else if( this.value >= rhs.value ) {
             return UP4_ONE;
         }
         asm { // does this work without .value?
             xor EAX, EAX;
             mov EDX, thisVal;
             div rhsVal;
             mov resVal, EAX;
         }
         res.value = resVal;
         return res;
     }
}
It looks a little odd without the typical conversion methods (see below), but I think I see where you're going with this. You're using asm to preserve the high-order bits on multiply and the low-order bits on divide? Have you tested this? Unless I'm mistaken your class maps the values of 0..1 over the range of 0..(2^32-1). So do you think that measuring things in 4billionths is enough precision for most folks? ;) I think this is correct:
up4 opAssign(real value){
    this.value = cast(uint)(uint.max * value);
    return *this;
}
real getReal(){
    return cast(real)this.value / uint.max;
}
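The round trip those two conversion methods are after, sketched in Python (note that the same scale factor, uint.max, applies on both sides):

```python
UINT_MAX = 0xFFFF_FFFF

def from_real(x):
    # map 0.0..1.0 onto 0..uint.max
    return int(UINT_MAX * x)

def to_real(v):
    # divide back out by the scale, not 1 / v
    return v / UINT_MAX

assert abs(to_real(from_real(0.5)) - 0.5) < 1e-9
```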
- Pragma
Aug 03 2004
parent reply parabolis <parabolis softhome.net> writes:
pragma <EricAnderton at yahoo dot com> wrote:

 In article <ceou30$1k6u$1 digitaldaemon.com>, parabolis says...
 
However I believe DMD is designed to support MS COM which looks 
like a less than optimal design coming from a minimal memory 
standpoint. I admit I laughed at the OOpic mention you made 
later as some pics have less than a hundred bytes of memory. Not 
exactly the kind of environment where you will want objects to 
specialise each other... However this is an extreme case and I 
believe OO could be useful for microcontrollers/small processors.
True, the OOPic is kinda dinky, but it keeps showing up in innumerable hobby projects as it doesn't require extra (and expensive) hardware that other technologies require. If you have a serial port, you can program an OOpic. It's really the ease of adoption factor that I love about the thing.
I bought a pic programmer and get to just buy the chip I want for ~$1-5, which is pretty decent considering the number I have killed :) I will have to take a look at the OOPic next time I am in that mindset. I also have been meaning to look into zilog for a few weeks now...
 I agree with you completely about OO and hardware.  Generally speaking, it can
 be at cross-purposes with a constricted environment, but I think that embedded
 software is long overdue for such an improvement.
 
 
 So basically if it wasn't for the .com-era-thingy, which invited Sun to drag
 their technology through the quagmire of immature technologies that was the
web,
 Sun could/would have had Oak running on every Tivo, cable box and Nintendo in
 the world.  Instead, by 1997 "Java" became synonymous with "slow webpage" and
 continues to grow ever more divergent from the (royalty rich) embedded market.
 
Lol SlowWeb(tm)? That definitely clears things up. Oak does sound familiar. I always wondered why Sun did not seem to be actually using Java for embedded systems and now it is much clearer.
 
Yes I have been meaning to make a list of the things D does 
/really/ well and I have to say I believe the memory management 
is sheer genius.
Yep, although this is truly more of an alchemy of Walter's vision for a better language using Bjarne Stroustrup's gc for C. Personally, I find the paradigm tweaking nature of D's syntax, all by itself, to be a far more profound advantage (however subtle it can be).
I would be interested to hear what sort of things you found most interesting?
The problem with that implementation is that you have numbers 
outside the range 0..1 or -1..1. Here is an extract of the 
unsigned percent implementation I have been working on which I 
believe is faster than floating point. Take a look and let me 
know if I have missed anything that would make it slower than 
floating point?
Okay, I see where you're going with this. Let me take a look.
================================================================
const up4 UP4_ONE =  { value: 0xFFFF_FFFF };
const up4 UP4_ZERO = { value: 0x0000_0000 };
struct up4 {
    uint value;

    up4 opAdd( up4 rhs ) {
        up4 res;
        res.value = this.value + rhs.value;
        return res;
    }

    up4 opSub( up4 rhs ) {
        up4 res;
        res.value = this.value - rhs.value;
        return res;
    }

    up4 opMul( up4 rhs ) {
        up4 res;
        uint thisVal = this.value;
        uint rhsVal = rhs.value;
        uint resVal;
        asm {
            mov EAX, thisVal;
            xor EDX, EDX;
            mul rhsVal;
            mov resVal, EDX;
        }
        res.value = resVal;
        return res;
    }

    up4 opDiv( up4 rhs ) {
        up4 res;
        uint thisVal = this.value;
        uint rhsVal = rhs.value;
        uint resVal;
        if( rhs.value == 0 ) {
            throw new Exception( "Division by zero" );
        }
        // avoid division overflow
        else if( this.value >= rhs.value ) {
            return UP4_ONE;
        }
        asm { // does this work without .value?
            xor EAX, EAX;
            mov EDX, thisVal;
            div rhsVal;
            mov resVal, EAX;
        }
        res.value = resVal;
        return res;
    }
}
It looks a little odd without the typical conversion methods (see below), but I think I see where you're going with this. You're using asm to preserve the high-order bits on multiply and the low-order bits on divide? Have you tested this?
Yes exactly and yes, at least the up4 struct. The sp4 struct currently has the mul and div commented out because I wanted to think them through a bit more when I get a chance. I can email the code to you if you would like to see it now, or perhaps if you want to use it yourself. I can certainly use any insights I can get.
 Unless I'm mistaken your class maps the values of 0..1 over the range of
 0..(2^32-1).  So do you think that  measuring things in 4billionths is enough
 precision for most folks? ;)
lol, probably not... Consider 80-bit floating point.
 
 I think this is correct:
 
up4 opAssign(real value){
    this.value = cast(uint)(uint.max * value);
    return *this;
}
real getReal(){
    return cast(real)this.value / uint.max;
}
That is actually the stuff I was about to start working on. Thanks!
Aug 03 2004
parent reply pragma <EricAnderton at yahoo dot com> <pragma_member pathlink.com> writes:
In article <cepa9t$1p6u$1 digitaldaemon.com>, parabolis says...
pragma <EricAnderton at yahoo dot com> wrote:
Yep, although this is truly more of an alchemy of Walter's vision for a better
 language using Bjarne Stroustrup's gc for C.  Personally, I find the paradigm
 tweaking nature of D's syntax, all by itself, to be a far more profound
 advantage (however subtle it can be). 
I would be interested to hear what sort of things you found most interesting?
Well I think Matthew said it best: Walter has a certain "pragmatic minimalism" about him that leads us to something like (c++)++ ... it's just that much better. He'll often take two wholly unrelated things and unify their syntaxes because he feels they should *parse* the same. This leads to a more unified and easier to read language.

1) D is context free. Every time I see something that holds me back in D, it is because there's a hole in this concept. I've perused the frontend source: IMO, it's mostly ready to adopt a solution for any such hole because of how it's designed. As each of these holes is filled, as has been happening all along, all sorts of wonderful functionality falls out.

2) Less is more with D. The elimination of the C preprocessor is meshed with most language modifiers to create "attribute blocks". Version and debug replace #ifdef, while public, private, protected, static, etc. all use the same syntax and fit into and apply to virtually every context within the language.

3) Goodbye template<Syntax_and,Ugliness<good_riddence> >; Although D's template syntax is a work in progress, it more or less exists side-by-side with the more "typical" D grammar. The more I use it, the less I'm inclined to think of it as anything else but "more D". Mixins are also a nice little twist that is new in the C-style family of languages. By comparison, C++ templates are mostly understood to be a language extension that behaves like a second preprocessor without being as bad as its brother (which isn't "#saying MUCH").

4) Packing DBC into the language is a good thing. At first glance, it may seem like a somewhat superfluous feature, seeing as how C and Java developers have gotten by without it for so long. What it does is allow library developers to provide a solid product that debugs well, but performs fast (in/out blocks don't compile in release). Plus I can see this being very useful in the classroom when teaching DBC concepts.
 It looks a little odd without the typical conversion methods (see below), but I
 think I see where you're going with this.  You're using asm to preserve the
 high-order bits on multiply and the low-order bits on divide?  Have you tested
 this?
Yes exactly and yes, at least the up4 struct. The sp4 struct currently has the mul and div commented out because I wanted to think them through a bit more when I get a chance. I can email the code to you if you would like to see it now, or perhaps if you want to use it yourself. I can certainly use any insights I can get.
For portability, you may just want to use bit-shifts in D rather than raw asm for the time being. Or better yet use "version(x86){ asm{ /*x86asm*/} }else{ /*D*/}" and just do both! It certainly seems like a good exercise. I personally have no direct use for such a type, but perhaps it may find a place in Demios? Also, I'd seriously look at that link that Ben posted, as GMP might make it a moot point?
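The portable (non-asm) route suggested here amounts to widening to 64 bits and shifting; in D that would be a ulong intermediate. In Python terms (a sketch of the arithmetic only):

```python
MASK = 0xFFFF_FFFF

def mul_portable(a, b):
    # same result as the mul/EDX asm: high word of the 64-bit product
    return ((a * b) >> 32) & MASK

def div_portable(a, b):
    # same result as the EDX:EAX div; assumes a < b and b != 0
    return ((a << 32) // b) & MASK

assert mul_portable(0x8000_0000, 0x8000_0000) == 0x4000_0000
```

Under a version(x86) block the asm would still be picked on Intel hardware, with this as the fallback everywhere else.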
 Unless I'm mistaken your class maps the values of 0..1 over the range of
 0..(2^32-1).  So do you think that  measuring things in 4billionths is enough
 precision for most folks? ;)
lol, probably not... Consider 80-bit floating point.
Hehe.. good point. But I've wondered: why 80bit? It doesn't even align on a 4-byte boundary. Why not 96bit instead? ::shrug::
 [snip (code) ]
That is actually the stuff I was about to start working on. Thanks!
NP, don't mention it. - Pragma
Aug 03 2004
parent reply parabolis <parabolis softhome.net> writes:
pragma <EricAnderton at yahoo dot com> wrote:

 In article <cepa9t$1p6u$1 digitaldaemon.com>, parabolis says...
 
pragma <EricAnderton at yahoo dot com> wrote:

Yep, although this is truly more of an alchemy of Walter's vision for a better
language using Bjarne Stroustrup's gc for C.  Personally, I find the paradigm
tweaking nature of D's syntax, all by itself, to be a far more profound
advantage (however subtle it can be). 
I would be interested to hear what sort of things you found most interesting?
Well I think Matthew said it best: Walter has a certain "pragmatic minimalism" about him that leads us to something like (c++)++ ... it's just that much better. He'll often take two wholly unrelated things and unify their syntaxes because he feels they should *parse* the same. This leads to a more unified and easier to read language. 1) D is context free. Every time I see something that holds me back in D, it is because there's a hole in this concept. I've perused the frontend source: IMO, it's mostly ready to adopt a solution for any such hole because of how it's designed. As each of these holes is filled, as has been happening all along, all sorts of wonderful functionality falls out.
I am actually looking at dmd to understand better how compilers work. I have noticed the docs say there is a firm separation between syntax/semantics/etc... However, how does that compare (in your opinion) with gc?
 2) Less is more with D.  The elimination of the C preprocessor is meshed with
 most language modifiers to create "attribute blocks".  Version and debug
replace
 #ifdef, while public, private, protected, static, etc.. all use the same syntax
 and fit into  and apply to virtually every context within the language.
Yes this was a really wise move. I think it might have been harder to justify the continued use of a preprocessor?
 3) Goodbye template<Syntax_and,Ugliness<good_riddence> >;  Although D's
template
 syntax is a work in progress, it more or less exists side-by-side with the more
 "typical" D grammar.  The more I use it, the less I'm inclined to think of it
as
 anything else but "more D".  Mixins are also a nice little trist that is new in
 the C-style family of languages.  By comparison, C++ templates are mostly
 understood to be a language extension that behaves like a 2nd preprocessor
without being as bad as its brother (which isn't "#saying MUCH").
lol I am rather interested to see how this turns out. It is a difficult problem. I am not much of a generic programmer myself but I do plan to dabble in this area a bit.
 4) Packing DBC into the language is a good thing.  At first glance, it may seem
 like asomewhat superfluous feature seeing as how C and Java developers have
 gotten by without it for so long.  What it does is allow library developers to
 provide a solid product that debugs well, but performs fast (in/out blocks
dont'
 compile in release).  Plus I can see this being very useful in the classroom
 when teaching DBC concepts.
Yes this was one aspect that kind of looked neat when I first read about it but I am almost taking it for granted already! Actually I have been using unittest quite a bit in my up4/sp4 file. I think what really impressed me the most was the elegance with which properties and attributes work with programmers and the way the compiler seems to store data.
 For portability, you may just want to use bit-shifts in D rather than raw asm
 for the time being.  Or better yet use "version(x86){ asm{ /*x86asm*/} }else{
 /*D*/}" and just do both!
I think I like the versioning option better. Anyone who hits the shift code can write an asm version for their computer.
 It certainly seems like a good exercise.  I personally have no direct use for
 such a type, but perhaps it may find a place in Demios?  Also, I'd seriously
 look at that link that Ben posted, as GMP might make it a moot point?
I actually saw that and looked through it. GMP is geared more toward larger-than-native types. GMP is to what I want as nukes are to carpet bombing. An example of what I want is perhaps closer to a neural net implementation. Byte-sized might be ok for precision but I am going to want millions of them. :)
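The same scheme scales down to the byte-sized weights mentioned here: with n representing n / 256 (a Q0.8-style sketch, my construction rather than anything from the thread):

```python
def q8_mul(a, b):
    # 8-bit fraction multiply: keep the high byte of the 16-bit product
    return (a * b) >> 8

half, quarter = 128, 64   # 0.5 and 0.25 as byte fractions
assert q8_mul(half, quarter) == 32   # 0.125
```

At one byte per weight, a million weights is a megabyte, versus four or eight for float/double arrays.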
Unless I'm mistaken your class maps the values of 0..1 over the range of
0..(2^32-1).  So do you think that  measuring things in 4billionths is enough
precision for most folks? ;)
lol, probably not... Consider 80-bit floating point.
Hehe.. good point. But I've wondered: why 80bit? It doesn't even align on a 4-byte boundary. Why not 96bit instead? ::shrug::
The designers were probably making that calculation on a prototype chip that had math bugs :P However it is odd that you should ask since I have just been looking around at Boyer-Moore algorithm sites on the web, and at Moore's site he has a page of "his" best ideas. Among them are the Boyer-Moore theorem prover (NQTHM) and its proof of the "Correctness of the AMD5k86 Floating-Point Division." Another interesting section is structure sharing, which it seems not only helped produce the Warren Abstract Machine (and thus to some extent Prolog) but also won Boyer and Moore the first annual prize from the "Foundation for the Advancement of Computational Logic by the Taking Out of Fingers" :) http://www.cs.utexas.edu/users/moore/best-ideas/index.html
Aug 03 2004
parent reply pragma <EricAnderton at yahoo dot com> <pragma_member pathlink.com> writes:
In article <cepi11$1r8r$1 digitaldaemon.com>, parabolis says...
I am actually looking at dmd to understand better how compilers 
work. I have noticed the docs say there is a firm separation 
between syntax/semantics/etc... However how does that compare 
(in your opionion) with gc?
I never looked at the gc quite like that before. I'd say that the gc exists in the forefront and extreme background of the language, without really touching the language itself. It doesn't impose itself on the language syntax at all, and it shouldn't since it's purely a concept. You have to code with the _awareness_ that a gc is being used which affects your style, but it exists transparently in the background during runtime. The only true interface that it provides is in phobos, which is quite optional and non-intrusive. This is a good thing, since we don't have to worry about juggling memory types all over the place. Imagine how cumbersome D would be with 'malloc', 'pinned', 'copyable' and 'gc' as attributes for references.
 2) Less is more with D. (no more preprocessor)
Yes this was a really wise move. I think it might have been harder to justify the continued use of a preprocessor?
 3) Goodbye template<Syntax_and,Ugliness<good_riddence> >
lol I am rather interested to see how this turns out. It is a difficult problem. I am not much of a generic programmer myself but I do plan to dabble in this area a bit.
It seems like good design to have each "super-grammar" (preprocessor syntax and templates) have its own feel, since they do different things. In practice, it has become obvious that you cannot use either or both to their peak potential, simply because doing so would severely impact code readability. This is because they can only work when they are all used in the same context. I think you'll find templates easier to grasp in D than in C++, for this very reason. I know I have.
 For portability, you may just want to use bit-shifts in D rather than raw asm
 for the time being.  Or better yet use "version(x86){ asm{ /*x86asm*/} }else{
 /*D*/}" and just do both!
I think I like the versioning option better. Anyone who hits the shift code can write an asm version for their computer.
My thought exactly. Version everything and just stamp "optimized for x86" on your file and move along. :)
 It certainly seems like a good exercise.  I personally have no direct use for
 such a type, but perhaps it may find a place in Demios?  Also, I'd seriously
 look at that link that Ben posted, as GMP might make it a moot point?
I actually saw that and looked through it. GMP is geared more toward larger-than-native types. GMP is to what I want as nukes are to carpet bombing. An example of what I want is perhaps closer to a neural net implementation. Byte-sized might be ok for precision but I am going to want millions of them. :)
Okay, I see what you're after here: the right tool for the right job. I understand fully; with DSP, I'm using my own parser, since most XML parsers are overkill for what I need. Also, .dsp files aren't going to be 100% valid XML all the time anyway. :)
http://www.cs.utexas.edu/users/moore/best-ideas/index.html
Interesting site, thanks for the link. :) Although I must confess, I never was one for the heavily mathematical side of computer science. Personally, I was a little more intrigued by the string searching algorithms, as I've been doing a lot of that lately with DSP. I always thought that using a Bloom Filter would be a great way to pre-optimize a string search, but the Boyer-Moore algorithm looks much nicer. I wonder how the two would compare? - Pragma
Aug 04 2004
parent reply parabolis <parabolis softhome.net> writes:
pragma <EricAnderton at yahoo dot com> wrote:

 In article <cepi11$1r8r$1 digitaldaemon.com>, parabolis says...
 
I am actually looking at dmd to understand better how compilers 
work. I have noticed the docs say there is a firm separation 
between syntax/semantics/etc... However how does that compare 
(in your opionion) with gc?
I never looked at the gc quite like that before. I'd say that the gc exists in
I am really sorry. By gc I meant the GNU Compiler, and I realized that Garbage Collector is probably the only sane way to parse that. I feel like an idiot. Could you be persuaded to answer the question with gc replaced by GNU Compiler?
 I think you'll find templates easier to grasp in D than in C++, for this very
 reason.  I know I have.
I actually sort of got C++ templates but never felt comfortable using them, because I was never fully comfortable with all the gotcha!s in C++ and wanted to deal with those before dealing with templates. I am rather looking forward to D's templates.

 It certainly seems like a good exercise.  I personally have no direct use for
 such a type, but perhaps it may find a place in Demios?  Also, I'd seriously
 look at that link that Ben posted, as GMP might make it a moot point?
I actually saw that and looked through it. GMP is geared more larger-than-native types. GMP is to what I want as nukes are to carpet bombing. An example of what I want is perhaps closer to a neural net implementation. Byte-sized might be ok for precision but I am going to want millions of them. :)
Okay, I see what you're after here: the right tool for the right job. I understand fully as with DSP, I'm using my own parser since most XML parsers are overkill for what I need. Also, .dsp files aren't going to be 100% valid XML all the time anyway. :)
I have just discovered mango so I might be looking at DSP in the near future. However, the place I am most likely to use any TCP/IP stuff is in a microcontroller as a substitute for CANN networks.
http://www.cs.utexas.edu/users/moore/best-ideas/index.html
Interesting site, thanks for the link. :)
np.
 
 Although I must confess, I never was one for the heavily mathematical side of
 computer science.  Personally, I was a little more intrigued by the string
Actually the logic stuff really intrigues me because Prolog syntax essentially encompasses all Bison/YACC grammars. Consider the following Prolog code from a simple parser example I wrote:

expr(Val) --> integer(Val).
expr(Val) --> var_name(Name), { get_var(Name,Val) }.

alphanum(Let) --> uppercase(Let).
alphanum(Let) --> lowercase(Let).
alphanum(A) --> [A], { A >= 48, A < 58 }.

uppercase(Let,[Let|Tail],Tail) :- Let >= 65, Let < 91.
lowercase(Let,[Let|Tail],Tail) :- Let >= 97, Let < 123.
 searching algorithms, as I've been doing a lot of that lately with DSP.  I
 always thought that using a Bloom Filter would be a great way to pre-optimize a
 string search, but the Boyer-Moore algorithm looks much nicer.  I wonder how
the
 two would compare?
I was not familiar with Bloom Filters. I found an example at perl.com. I cannot imagine a Boyer-Moore search being used in a general hashtable. It is more useful for finding something in a really long target (e.g. searching the contents of an entire file for a pattern). The reason is that BM creates a couple of arrays (one the size of the alphabet, 64k or 256 entries) to perform its magic. If you can swallow the overhead then you will not look at some/many/most of the characters in the file.
Aug 04 2004
parent reply pragma <EricAnderton at yahoo dot com> <pragma_member pathlink.com> writes:
In article <cer33b$6fe$1 digitaldaemon.com>, parabolis says...
I am really sorry. By gc I mean GNU Compiler and I realized that 
Garbage Collector is probably the only sane way to parse that. 
I feel like an idiot. Could you be persuaded to answer the 
question that replaces gc with GNU Compiler?
Well, I haven't really looked much at the GCC source, so I really can't say. Judging by how the front and back ends are interchangeable, I'd say it's a fair bet that it meets this syntax/semantic divide fairly well. And given that the DMD frontend works with both Digital Mars' own backend and GCC, I'd say that this is more than likely.
I have just discovered mango so I might be looking at DSP in the 
near future. However the place I am most likely to use any 
TCP/IP stuff is in a microcontroller as a substitute for CANN 
networks.
CANN = Cognitive Artificial Neural Network? If that's the case then yea, mango is way over-developed for what you're looking for. As for DSP, I'm currently working as hard as I can to get beta 1 out the door.
 searching algorithms, as I've been doing a lot of that lately with DSP.  I
 always thought that using a Bloom Filter would be a great way to pre-optimize a
 string search, but the Boyer-Moore algorithm looks much nicer.  I wonder how
the
 two would compare?
I was not familiar with Bloom Filters. I found an example at perl.com. I cannot imagine a Boyer-Moore search being used in a general hashtable.
Bloom Filters work well when you need to find a match in a very large set. Rather than search all the hashcodes for an exact match, you'd first check against a reduced hash value. In this case, that reduced hash value is really just one bit in a binary map, which is itself a hashed value.

// D makes this a cinch.
struct BloomFilter {
    bit[128] filter;
    bit isPossibleMatch(uint hashcode) {
        return filter[hashcode % 128];
    }
    void addToFilter(uint hashcode) {
        filter[hashcode % 128] = 1;
    }
}

The idea is that the larger the filter, the fewer false positives you will get. As it is probabilistic, you still need to back up a "positive" match with a real search, just in case the filter lied to you. Also, the space and computational cost of the filter are both very small (compared to the size of the map, which is usually large for such applications) for such a good elimination technique.
It is more useful for finding something in a really long target 
(eg searching the contents of an entire file for a pattern). The 
reason is that BM creates a couple arrays (one the size of the 
alphabet, 64k or 256 bytes) to perform its magic. If you can 
swallow the overhead then you will not look at some/many/most of 
the characters in the file.
I'm probably going to wind up using this when I eventually have to refactor DSP. Its ideal since this is how my application is set up: a whole bunch of discrete keys that don't change vs a limitless number of possible data sets to search. - Pragma
Aug 04 2004
parent parabolis <parabolis softhome.net> writes:
pragma <EricAnderton at yahoo dot com> wrote:
 In article <cer33b$6fe$1 digitaldaemon.com>, parabolis says...
 
I am really sorry. By gc I mean GNU Compiler and I realized that 
Garbage Collector is probably the only sane way to parse that. 
I feel like an idiot. Could you be persuaded to answer the 
question that replaces gc with GNU Compiler?
Well, I haven't really looked much at the GCC source, so I really can't say. Judging by how the front and back ends are interchangeable, I'd say it's a fair bet that it meets this syntax/semantic divide fairly well. And given that the DMD frontend works with both Digital Mars' own backend and GCC, I'd say that this is more than likely.
I was contemplating writing a frontend for a language, but the docs at the time were non-existent. I think it can be fairly easy to generalise C-type languages, but if you have a completely different semantic view than C you will have to write the whole thing yourself...
I have just discovered mango so I might be looking at DSP in the 
near future. However the place I am most likely to use any 
TCP/IP stuff is in a microcontroller as a substitute for CANN 
networks.
CANN = Cognitive Artifical Neural Network? If that's the case then yea, mango is way over developed for what you're looking for. As for DSP, I'm currently working as hard as I can to get beta 1 out the door.
Controller Area Network (CAN). Wherever did that extra N come from? (sorry again) I will eventually be facing the problem of connecting many microcontrollers. I have not yet seriously considered the problem, but I remember that I^2C and CAN are candidates... The overhead for a TCP/IP stack just makes everything more expensive. I think the area I am interested in is what ought to be known as parallel computing, as opposed to the field that term applies to, in which monolithic serial computers are used in parallel. It is much simpler to build a semantic network (for example) from small microcontroller networks than to simulate a small network on a much faster monolithic serial computer.
searching algorithms, as I've been doing a lot of that lately with DSP.  I
always thought that using a Bloom Filter would be a great way to pre-optimize a
string search, but the Boyer-Moore algorithm looks much nicer.  I wonder how the
two would compare?
I was not familiar with Bloom Filters. I found an example at perl.com. I cannot imagine a Boyer-Moore search being used in a general hashtable.
Bloom Filters work well when you need to find a match in a very large set. Rather than search all the hashcodes for an exact match, you'd first check against a reduced hash value. In this case, that reduced hash value is really just one bit in a binary map, which is itself a hashed value.

// D makes this a cinch.
struct BloomFilter {
    bit[128] filter;
    bit isPossibleMatch(uint hashcode) {
        return filter[hashcode % 128];
    }
    void addToFilter(uint hashcode) {
        filter[hashcode % 128] = 1;
    }
}

The idea is that the larger the filter, the fewer false positives you will get. As it is probabilistic, you still need to back up a "positive" match with a real search, just in case the filter lied to you. Also, the space and computational cost of the filter are both very small (compared to the size of the map, which is usually large for such applications) for such a good elimination technique.
I can see why this would help alleviate chaining in a hashtable. Or at least some of the side effects.
 
 
 
It is more useful for finding something in a really long target 
(eg searching the contents of an entire file for a pattern). The 
reason is that BM creates a couple arrays (one the size of the 
alphabet, 64k or 256 bytes) to perform its magic. If you can 
swallow the overhead then you will not look at some/many/most of 
the characters in the file.
I'm probably going to wind up using this when I eventually have to refactor DSP. Its ideal since this is how my application is set up: a whole bunch of discrete keys that don't change vs a limitless number of possible data sets to search.
Provided the alphabet of possible keys is small (~8-bit) this should work well. I am not able to construct much of a model of what you are doing from your description, however...
Aug 04 2004
prev sibling parent Stephen Waits <steve waits.net> writes:
pragma <EricAnderton at yahoo dot com> wrote:
 In article <ceoi6l$1dmf$1 digitaldaemon.com>, Stephen Waits says...
 
pragma <EricAnderton at yahoo dot com> wrote:
In article <celv5s$9ag$1 digitaldaemon.com>, parabolis says...

The sin() and cos() tables?
Lookup tables should actually be slower on just about any modern system.
I'll have to both agree and disagree with your statement here. I agree that sin/cos lookups are hardly needed anymore on modern hardware, since you're no longer worried about an fmul taking 27 clock cycles to complete. Also, the potential cache miss for accessing the table might well outweigh any advantage in side-stepping such math. But I don't agree that a lookup scheme for calculations is wholly without merit, as there are plenty of examples of applications out there today that do this sort of pre-calculation. The difference is that the scale of pre-optimization has risen to take advantage of larger memory spaces than were available before. Plus, there are still plenty of embedded systems out there that use slower chips and run with far less RAM than our PCs.
Right, basically, as I was referring only to the sin/cos (and, implicitly, tan, sqrt, etc.); so we agree. Lookup tables certainly still have their place.
Someone's already written a bigint library which might fit 
your needs here.
Yep. Arcane Jill's class comes pretty close to the mark. All it really needs is a fractional portion or exponent and a way to perform math without loss of data.
Just to clarify - I only meant to suggest that as a starting point. Good luck. --Steve
Aug 03 2004
prev sibling parent reply Ilya Minkov <minkov cs.tum.edu> writes:
Stephen Waits schrieb:

 Lookup tables should actually be slower on just about any modern system.
This is not strictly true. If all you need is a byte, then there will be more hoops in doing the operation on the FPU than in fetching from the table. One must just consider that the table has to be very small to be efficient: anything past a few kilobytes isn't, but the sin table is. Besides, D should have the potential to run on a handheld which doesn't have an FPU. -eye
Aug 19 2004
parent Stephen Waits <steve waits.net> writes:
Ilya Minkov wrote:
 Stephen Waits schrieb:
 
 Lookup tables should actually be slower on just about any modern system.
This is not strictly true.
Which is why I said "just about any", and not "every".
 Besides, D should have a potential to run on a handheld which doesn't 
 have an FPU.
The handheld I code on has a great FPU :) --Steve
Aug 19 2004
prev sibling parent reply Ilya Minkov <minkov cs.tum.edu> writes:
There is another field where floating-point has already caused great grief, namely sound processing. When chaining filters, one day the situation comes that a soft, inaudible value gets multiplied by another tiny value. What happens then? The number cannot be represented in the range and it goes into denormalized representation. The problem is that working with denormalized numbers takes a few dozen times longer than with normal ones. And since the numbers don't come alone, but as a stream of similarly small numbers, the performance drops by a few times while processing something one doesn't want to hear.

There are different solutions; the most common is to insert a filter at some point that zeroes out the small numbers. Some do it by matching the bits, some by adding and subtracting a small value... But why not simply drop it and use fixed point?

I think fixed point is simply wonderful for that, even if it wouldn't be very precise (borderline conditions such as +-inf and nan, losing bits at division, etc.).

-eye

pragma <EricAnderton at yahoo dot com> schrieb:

 In article <ceks4g$2sc6$1 digitaldaemon.com>, Jonathan Leffler says...
 
Here are two links, one to a site with a lot of information about
decimal arithmetic (loosely related) and one to a rather famous paper
with the title "What Every Computer Scientist Should Know About
Floating-Point Arithmetic" (linked from the decimal site at Hursley).

[snip]
 Thank you John, for these links. They're a great read. :)
 
 I haven't seen anything on this topic since I was hand-coding arithmetic in ASM
 to speed up graphics code on my 286.  No math co-processor, so floating point
 calculations were pretty much out of the question.  Even integer multiplication
 would cost you plenty of precious cycles.  Simple byte/byte ratios usually did
 the trick for crude 3d matrix operations.  Everything else relied on
 pre-calculated tables for sin() and cos() values... those were the days.
 
 Nostalgia aside, I think this kind of arithmetic could have some very impressive
 benefits for D.  Promoting D as a language that supports "perfect precision" for
 financial and scientific programming comes to mind.
 
 For those of you that have doubts that this is useful, I urge you to take a look
 at the first site that Jonathan linked:
 
 http://www2.hursley.ibm.com/decimal/
 
 "Most computers today support binary floating-point in hardware. While suitable
 for many purposes, binary floating-point arithmetic should not be used for
 financial, commercial, and user-centric applications or web services because the
 decimal data used in these applications cannot be represented exactly using
 binary floating-point."
 
 This is backed up in the FAQ on the site:
 
 http://www2.hursley.ibm.com/decimal/decifaq1.html#inexact
 
 The implementation specification they have put forward is for Java (of course),
 but could easily be done with D.  What is important is that it supports "±0,
 ±Infinity, and Not-a-Number (NaN)" which would allow it to inter-operate with
 D's floating point and complex types rather nicely.
 
 http://www2.hursley.ibm.com/decimal/dbspec.html
 http://www2.hursley.ibm.com/decimal/decbits.html
 
 - Pragma
Aug 19 2004
parent reply "chrisj" <no.spam for.me> writes:
In audio the main cause of denormal numbers is IIR filters. Fixed point is
not good for IIR filters; the lack of precision can cause instability.
Because of the highly recursive nature, small errors become big errors very
quickly. Most people use doubles or extended for filters anyway. Yamaha do
use fixed point in their digital mixing desks and have custom chips that do
the eq (IIR) in 44-bit precision. And it could be argued that eq is not
as susceptible to the lack of precision as some other IIRs may be.

So the main cause of denormal numbers is not best suited to fixed point
anyway. And fixing the denormal problem is very simple. Far simpler than
writing your audio algorithms in fixed point.

And it is a cpu design bug/problem, not a problem with floating point per
se.

Just my 2c.

chris



"Ilya Minkov" <minkov cs.tum.edu> wrote in message
news:cg30jh$2d6r$1 digitaldaemon.com...
There is another field where floating-point has already caused a great
grief, namely sound processing. When chaining filters, one day the
situation comes that a soft, inaudible value gets multipled by another
tiny value. What happens then? The number cannot be represented in the
range and it goes into denormalized representation. The problem is, that
working with denormalized numbers takes a few dozen times longer than
with normal ones. And since the numbers don't come alone, but as a
stream of similarly small numbers, the performance drops by a few times
processing something one doesn't want to hear.

There are different solutions, the most often is to insert a filter at
some point that zeroes out the small numbers. Some do it by matching the
bits, some by adding and subtracting a small value... But why not simply
drop it and use fixedpoint?

I think fixedpoint is simply wonderful for that, even if it wouldn't be
very precise (borderline conditions such as +-inf and nan, loosing bits
at division, etc.).

-eye

pragma <EricAnderton at yahoo dot com> schrieb:

 In article <ceks4g$2sc6$1 digitaldaemon.com>, Jonathan Leffler says...

Here are two links, one to a site with a lot of information about
decimal arithmetic (loosely related) and one to a rather famous paper
with the title "What Every Computer Scientist Should Know About
Floating-Point Arithmetic" (linked from the decimal site at Hursley).

[snip]
Thank you John, for these links. They're a great read. :) I haven't seen anything on this topic since I was hand-coding arithmetic
in ASM
 to speed up graphics code on my 286.  No math co-processor, so floating
point
 calculations were pretty much out of the question.  Even integer
multiplication
 would cost you plenty of precious cycles.  Simple byte/byte ratios usually
did
 the trick for crude 3d matrix operations.  Everything else relied on
 pre-calculated tables for sin() and cos() values... those were the days.

 Nostalgia aside, I think this kind of arithmetic could have some very
impressive
 benefits for D.  Promoting D as a language that supports "perfect
precision" for
 financial and scientific programming comes to mind.

 For those of you that have doubts that this is useful, I urge you to take
a look
 at the first site that Jonathan linked:

 http://www2.hursley.ibm.com/decimal/

 "Most computers today support binary floating-point in hardware. While
suitable
 for many purposes, binary floating-point arithmetic should not be used for
 financial, commercial, and user-centric applications or web services
because the
 decimal data used in these applications cannot be represented exactly
using
 binary floating-point."

 This is backed up in the FAQ on the site:

 http://www2.hursley.ibm.com/decimal/decifaq1.html#inexact

 The implementation specification they have put forward is for Java (of
course),
 but could easily be done with D.  What is important is that it supports
"±0,
 ±Infinity, and Not-a-Number (NaN)" which would allow it to inter-operate
with
 D's floating point and complex types rather nicely.

 http://www2.hursley.ibm.com/decimal/dbspec.html
 http://www2.hursley.ibm.com/decimal/decbits.html

 - Pragma
Aug 19 2004
parent reply Ilya Minkov <minkov cs.tum.edu> writes:
chrisj schrieb:

 In audio the main cause of denormal numbers are IIR filters. Fixed point is
 not good for IIR filters, the lack of precision can cause instability.
 Because of the highly recursive nature small errors become big errors very
 quickly. Most people use doubes or extended for filters anyway. Yamaha do
 use fixed point in their digital mixing desks and have custom chips that do
 the eq(IIR) in 44 bit precision. And it could be argued that eq is not that
 as suceptible to the lack of precision that some other IIRs may be.
Interesting. Though I question whether these errors really matter for sound. I'd probably give it 4 bits for the whole part and the remaining 28 bits for the fractional part. At the end I only need the 16 bits in the upper middle. Probably Yamaha desks aim for much, much higher quality than I do.
 So the main cause of denormal numbers is not best suited to fixed point
 anyway. And fixing the denormal problem is very simple. Far simpler than
 writing your audio algorithms in fixed point.
Simpler? Well, yes it is when the rest is already there, but on the other hand writing a completely fixed-point synth would be a valid experiment all by itself. If it fails, well, then I at least get a synth with a unique sound. :))))) The immensely popular SID comes to mind.
 And it is a cpu design bug/problem, not a problem with floating point per
 se.
True, but it is not an x86-only problem; it is the same on PowerPC and presumably other systems. And seeing how Intel has ignored it and set the denormalization margin one order of magnitude higher than common on the Pentium 4, it's hardly gonna go away and has to be coped with. -eye
Aug 19 2004
parent "chrisj" <no.spam for.me> writes:
"Ilya Minkov" <minkov cs.tum.edu> wrote in message
news:cg37jm$2iri$1 digitaldaemon.com...
 chrisj schrieb:

 In audio the main cause of denormal numbers is IIR filters. Fixed point is
 not good for IIR filters; the lack of precision can cause instability.
 Because of the highly recursive nature, small errors become big errors very
 quickly. Most people use doubles or extended for filters anyway. Yamaha do
 use fixed point in their digital mixing desks and have custom chips that do
 the eq (IIR) in 44-bit precision. And it could be argued that eq is not
 as susceptible to the lack of precision as some other IIRs may be.
Interesting. Though i question whether these errors really matter on sound. I'd be probably giving it 4 bits for the whole part, the rest of 28 bits for the fractional part. At the end i only need the 16 bits in the upper middle. Probably Yamaha decks aim for much, much higher quality than
myself.
Most people writing audio plugins seem to be using double precision for filters and coefficients. The reason is that the high level of feedback within an IIR can amplify small errors very quickly. It's not errors that you hear directly, but errors that can affect the stability of the filter. So the filter might occasionally spew out some odd noise, or the signal might be distorted.
 So the main cause of denormal numbers is not best suited to fixed point
 anyway. And fixing the denormal problem is very simple. Far simpler than
writing your audio algorithms in fixed point.
Simpler? Well yes it is when the rest is already there, but on the other hand writing a completely fixpoint synth would be a valid experiment all by itself. If it fails - well, then i at least get a synth with a unique sound. :))))) Immensely popular SID comes to mind.
Some current hardware synths use fixed point. Waldorf MicroQ, Access Virus that I know of. But I think that has more to do with fixed-point DSP chips being a lot cheaper. So it definitely works, it's just that there are more things to consider in making the most of the bits available to you: being careful of the gain structure and that any intermediate values don't lose precision or overflow, accumulation of rounding errors, etc. Things you can (almost) forget about with floating point. But as this is just 2nd-hand information I may be overstating it; I don't have any personal experience of doing audio DSP in fixed point.
 And it is a cpu design bug/problem, not a problem with floating point
per
 se.
True, but it is not an x86 only problem, it is the same on PowerPC and presumably other systems. And seeing how intel has ignored it and set the denormalization margin one order of magnitude higher than common on pentium4, it's hardly gonna go away
Tbh I don't see the point :-) in denormals; it seems like one of those ideas that must have sounded good at the time but looks daft in hindsight. I suppose when the spec was being drawn up 10 or 15 years ago those extra bits of precision seemed like a big deal.
 and has to be coped with.
Yes, it has to be coped with, but it is very easy to fix. Just a few lines of assembler can do it: put it at the front of any IIR filters and you're sorted. chris
Aug 19 2004