
digitalmars.D - real to int: rounding?

reply "Lionello Lunesu" <lio lunesu.removethis.com> writes:
When casting a float/real to an int, D (like C) rounds down. But wouldn't 
many float-int problems be solved by rounding to nearest?

For example, that 10^2 bug in the bugs newsgroup. pow(10,2) gives 99.999..., 
which would result in 100 when cast to int.

As for performance, the FPU just as happily rounds to nearest. In fact, I 
think it's the default.

Portability of C code that expects rounding-down would get tricky, though.

What was the reason for rounding down in C? Does anybody know? Is it more 
logical? I don't quite get it.

Lio. 
May 05 2005
next sibling parent reply Stewart Gordon <smjg_1998 yahoo.com> writes:
Lionello Lunesu wrote:
 When casting a float/real to an int, D (like C) rounds down. But wouldn't 
 many float-int problems be solved by rounding to nearest?
 
 For example, that 10^2 bug in the bugs newsgroup. pow(10,2) gives 99.999..., 
 which would result in 100 when cast to int.

As I try it, I get

    power.d:5: function std.math.pow overloads real(real x,uint n) and 
    real(real x,real y) both match argument list for pow

But that might be just gdc. (Which I've just installed on this Mac, so I can 
finally try stuff out here as well!)
 As for performance, the FPU just as happily rounds to nearest. In fact, I 
 think it's the default.

How does your FPU round .5s?
 Portability of C code that expects rounding-down would get tricky, though.
 
 What was the reason for rounding down in C? Does anybody know? Is it more 
 logical? I don't quite get it.

My guess is that it was to match the behaviour of integer division, which was 
in turn defined to support easy generation of (quotient, remainder) pairs.

Stewart.

-- 
My e-mail is valid but not my primary mailbox.  Please keep replies on the 
'group where everyone may benefit.
May 05 2005
parent "Lionello Lunesu" <lio lunesu.removethis.com> writes:
 For example, that 10^2 bug in the bugs newsgroup. pow(10,2) gives 99.999..., 
 which would result in 100 when cast to int.

As I try it, I get

    power.d:5: function std.math.pow overloads real(real x,uint n) and 
    real(real x,real y) both match argument list for pow

I guess it should be pow(10.0,2.0)
 As for performance, the FPU just as happily rounds to nearest. In fact, I 
 think it's the default.

How does your FPU round .5s?

Hmm, yeah, if I do it in inline asm (fld, fist) with VC6 I get 0 for 0.5 and 1 
for anything slightly higher. I don't know whether there's an FPU flag to 
change this behaviour (as there is for rounding down, up, or to nearest).
 What was the reason for rounding down in C? Does anybody know? Is it more 
 logical? I don't quite get it.

My guess is that it was to match the behaviour of integer division, which was in turn defined to support easy generation of (quotient, remainder) pairs.

Good point. int i = 10/3 results in the same value as int i = 10.0/3.0. Seems 
consistent.

L.
May 06 2005
prev sibling parent reply Kevin Bealer <Kevin_member pathlink.com> writes:
In article <d5chpg$ku2$1 digitaldaemon.com>, Lionello Lunesu says...
When casting a float/real to an int, D (like C) rounds down. But wouldn't 
many float-int problems be solved by rounding to nearest?

For example, that 10^2 bug in the bugs newsgroup. pow(10,2) gives 99.999..., 
which would result in 100 when cast to int.

As for performance, the FPU just as happily rounds to nearest. In fact, I 
think it's the default.

Portability of C code that expects rounding-down would get tricky, though.

What was the reason for rounding down in C? Does anybody know? Is it more 
logical? I don't quite get it.

Lio. 

From the general theme of C's design, I would guess it is performance. You can 
round down via bit shifting. If you want to round toward zero or toward the 
nearest, I think you need a few more instructions. C specified a LOT of things 
that were supposed to be implemented in "whatever is the fastest way".

For example, "i = j++ + --j;" is an undefined piece of code. If the compiled 
code divides by zero, chases a null pointer, or whatever here, the programmer 
gets the blame, because somewhere in the C spec it says "don't do that". I 
can't imagine why this is both legal and undefined, but apparently it is 
easier for someone, somewhere, or allows you to produce faster compiled code 
somehow?

But I suspect FPU or fortran design is a stronger influence -- if fortran did 
XYZ, and FPUs followed fortran, then... I don't know the history, though.

long round_toward_nearest(double x)
{
    return cast(long)(x + 0.5);
}

long round_toward_zero(double x)
{
    // (works on systems that round down OR toward zero.)
    return (x > 0.0) ? (cast(long) x) : -(cast(long)-x);
}

Kevin
May 06 2005
next sibling parent "Lionello Lunesu" <lio lunesu.removethis.com> writes:
 From the general theme of C's design, I would guess it is performance. 
 You can
 round down via bit shifting.  If you want to round toward zero or toward 
 the
 nearest, I think you need a few more instructions.  C specified a LOT of 
 things
 that were supposed to be implemented in "whatever is the fastest way".

Whatever was the fastest way THEN. :-S
 For example, "i = j++ + --j;" is an undefined piece of code.  If the 
 compiled
 code divides by zero, chases a null pointer, or whatever here, the 
 programmer
 gets the blame, because somewhere in the C spec it says "don't do that". 
 I
 can't imagine why this is both legal and undefined, but apparently it is 
 easier
 for someone, somewhere, or allows you to produce faster compiled code 
 somehow?

That's kind of the problem I have with D's C++ preference: how many decisions 
in C/C++ were taken based on arguments that no longer apply? D should be 
careful about copying stuff from the old languages.
 But I suspect FPU or fortran design is a stronger influence -- if fortran 
 did
 XYZ, and FPUs followed fortran, then... I don't know the history, though.

 long round_toward_nearest(double x)
 {
 return cast(long)(x + 0.5);
 }

You have the extra addition and then the rounding-down cast. If the FPU is not 
set up for rounding-down, the rounding will have to be done by even more FPU 
instructions. Seems a waste of the FPU if you know that "asm fld [r]; asm fist 
[i];" rounds to nearest.
 long round_toward_zero(double x)
 {
 // (works on systems that round down OR toward zero.)
 return (x > 0.0) ? (cast(long) x) : -(cast(long)-x);
 }

Again, the FPU should do all this. I don't want to write code for something 
that I know can be done easily by the processor. This function even has a 
branch in it. "?" must be the worst operator ever: the convenience of typing 
it completely hides the impact on performance.

L.
May 06 2005
prev sibling parent Sean Kelly <sean f4.ca> writes:
In article <d5f585$2s82$1 digitaldaemon.com>, Kevin Bealer says...
For example, "i = j++ + --j;" is an undefined piece of code.  If the compiled
code divides by zero, chases a null pointer, or whatever here, the programmer
gets the blame, because somewhere in the C spec it says "don't do that".  I
can't imagine why this is both legal and undefined, but apparently it is easier
for someone, somewhere, or allows you to produce faster compiled code somehow?

If this were made an error then compilers would have to detect the problem 
reliably. Simple example, but what about this:

int a[2];
int& c = a[0];
int* d = &a[0];

void f(int& d, int& e)
{
    d = ++a[0] + --e + a[1];
}

void main()
{
    f(c, *(++d));
}

By making the behavior simply undefined they free the compiler writer from the 
burden of complex code analysis, which was likely an important consideration.

Sean
May 06 2005