## digitalmars.D - Proposal: fixing the 'pure' floating point problem.

• Don (60/60) Mar 13 2009 Consider this code:
• Denis Koroskin (8/68) Mar 13 2009 Does it mean that *any* pure function that has floating-point arithmenti...
• Don (17/111) Mar 13 2009 No. The status parameter is a register on the FPU. You don't have any
• Joel C. Salomon (19/26) Mar 13 2009 IEEE 754-2008, §4 (Attributes and Rounding), prescribes:
• Don (6/37) Mar 13 2009 But there's only 2 modes that are important in practice: 'default' and
• Walter Bright (7/7) Mar 13 2009 While it's a good suggestion, I think there's a fundamental problem with...
• Daniel Keep (5/12) Mar 13 2009 I thought the behaviour was that if you call a function in a
• Jason House (2/9) Mar 13 2009 Maybe I misunderstood, but I thought anything marked module(system) coul...
• Walter Bright (4/19) Mar 13 2009 No, all system means is no checking is done for safe mode, even if the
• Don (8/15) Mar 13 2009 That's true, but if you're in a floatingpoint module, and you call a
• Walter Bright (3/11) Mar 13 2009 Perhaps we can go with something simpler. If you call a pure function,
• Don (2/15) Mar 14 2009 But then nothing in std.math can be pure.
• Don (19/35) Mar 14 2009 My proposal is pretty simple -- I doubt we can come up with anything
• Walter Bright (5/43) Mar 14 2009 I'm still not seeing the difference between this and saying that for
• Don (4/49) Mar 14 2009 The math functions need to work for any rounding mode, not just the
• Walter Bright (3/6) Mar 14 2009 Ok, then std.math functions cannot be pure in either your or my
• Andrei Alexandrescu (8/15) Mar 14 2009 Ok, I need to clear some ignorance on my part:
• Walter Bright (10/15) Mar 14 2009 Given a complex calculation, one might want to know how sensitive the
• Don (25/32) Mar 14 2009 They _can_ be pure in my proposal.
• Don (26/43) Mar 15 2009 Or to put it another way -- in my proposal, we acknowledge that these
• Walter Bright (25/25) Mar 15 2009 Let's say we have A which is in a floatingpoint module, B which is in a
• Don (11/39) Mar 15 2009 A has called a function in B. B is not a floatingpoint module, so b()
• Walter Bright (6/16) Mar 15 2009 Ok, this was the missing piece in my understanding of the proposal.
• Sergey Gromov (23/32) Mar 15 2009 In Don's proposal, the following is legal:
• Don (4/41) Mar 16 2009 Hooray! Someone's understood the proposal.
• Michel Fortin (38/66) Mar 16 2009 Interestingly, it's almost the same thing as I proposed earlier in this
• Don (8/87) Mar 16 2009 That requires a new keyword, four new calling conventions, a new name
• Michel Fortin (37/43) Mar 16 2009 Which isn't much different as adding a new extern(x) option, for which
• Don (21/71) Mar 16 2009 You have to be able to SET the rounding mode. In the dynamic case, you
• Don (32/52) Mar 16 2009 I'm proposing that std.math would be floatingpoint. The docs contain
• Walter Bright (8/70) Mar 17 2009 I agree that's a problem.
• Don (32/52) Mar 17 2009 It can certainly be floatingpoint. That's the whole point of the
• Sergey Gromov (20/40) Mar 17 2009 Let's see.
• Michel Fortin (20/33) Mar 14 2009 Just dumping another idea in the suggestion box. Perhaps for all
• Don (7/43) Mar 14 2009 I think so. My proposal is basically like that, except that it asserts
• Philip Miess (20/33) Apr 04 2009 Walter,
• Philip Miess (8/8) Apr 04 2009 of course my example makes no sense
• Don (10/51) Apr 04 2009 That's actually a LOT more complicated than my suggestion. Also, it's
• Philip Miess (34/90) Apr 10 2009 Don,
• Don (26/123) Apr 14 2009 Unfortunately, it _does_ require compiler changes. Here are some issues:
• Joel C. Salomon (3/10) Mar 14 2009 So in 754-2008 terms, the mode is *always* set to “dynamic”?
• Christopher Wright (3/7) Mar 13 2009 Is compiler-determined memoization a confirmed feature in a near
• bearophile (4/6) Mar 15 2009 Because in technology a lot of things aren't determined on technological m...
• Daniel Keep (4/12) Mar 15 2009 Interesting article on the history of 754:
Don <nospam nospam.com> writes:
```Consider this code:
double add(double x, double y) {
return x + y;
}
Is this code pure? Is it nothrow?
Technically, it isn't. If you change the floating-point rounding mode on
the processor, you get a different result for exactly the same inputs.
If you change the floating-point traps, it could throw a floating-point
overflow exception.

One Draconian solution would be to say that this code is NOT pure. This
would mean that floating-point could not be used _at all_ in pure code.
Therefore, a template function like this could probably not be pure:
T add(T)(T x, T y) { return x + y; }
And everything unravels from there.

Another solution would be to simply ignore the floating point flags, and
mark the relevant functions as pure anyway.  That would be a shame --
DMD goes to a lot of trouble to ensure that they remain valid (which is
one of the reasons why DMD's floating point code is so slow). We'd be
well behind C99 and C++ in support for IEEE floating-point. I don't like
that much.

--- A solution ---

Extend the parametrized module declaration to include something like
module(system, floatingpoint)
as well as
module(system).

This would indicate that the module is floating-point aware. Every
function in that module has two implicit inout parameters: the floating
point status and control registers. This matters ONLY if the compiler
chooses to cache pure function results: if it caches the result of any
function in that module which is marked as 'pure', it must also check the
floating-point status and control, whenever the function is called from
inside a floatingpoint module. (Most likely, the compiler would simply not
bother caching results of pure functions in floatingpoint modules.) This
ensures purity is preserved, _and_ the
advanced floating point features remain available.

Functions inside a floating-point aware module would behave exactly as
they do in normal D.

And now comes the big win. The compiler knows that if a module is not
"floatingpoint", the status flags and rounding DO NOT MATTER. It can
assume the floating-point default situation (round-to-nearest, no
floating point exceptions activated). This allows the compiler to
optimize more aggressively. Of course, it is not required to do so.

Note: The compiler can actually cache calls to pure functions defined in
"floatingpoint" modules in the normal way, since even though the functions
may set the status flags, a caller outside a floatingpoint module
isn't interested. But I doubt the compiler would bother.

This proposal is a little similar to the "Borneo" programming language
proposal for Java,
http://sonic.net/~jddarcy/Borneo/
which was made by one of William Kahan's students. He proposed
annotating every function, specifying which floating point exceptions it
may read or write. In my opinion, that's massive overkill -- it's only
rarely that you care. Most of the time, you don't. And even when you do
care about it, it will only be in a small number of modules.

Since DMD doesn't cache pure functions, it doesn't require any changes
to support this (other than the module statement).

BTW, module(floatingpoint) is just a suggestion. I'm sure someone could
come up with a better term. It could even be really verbose, it's hardly
ever going to be used.

Don.
```
Mar 13 2009
"Denis Koroskin" <2korden gmail.com> writes:
```On Fri, 13 Mar 2009 13:52:08 +0300, Don <nospam nospam.com> wrote:

Consider this code:
double add(double x, double y) {
return x + y;
}
Is this code pure? Is it nothrow?
[...]

Does it mean that *any* pure function that has floating-point arithmetic
involved will have to carry an additional status parameter? If so, why make it
explicit?

I've been programming in C++ for years now and have never ever used floating
point exception or customized rounding modes. It may be useful in some cases,
but I believe it is not a feature of frequent use.
That's why I believe the following would be suitable for most programmers:

- whenever you enter a pure function, all floating point settings get saved to
stack and reset to defaults (rounding to nearest, no exceptions etc).
- floating point settings get restored upon leaving the pure function (normally
or via exception)
- user may change rounding modes explicitly inside a pure function, but changes
won't be visible to outer code (see previous point)

```
Mar 13 2009
Don <nospam nospam.com> writes:
```Denis Koroskin wrote:
On Fri, 13 Mar 2009 13:52:08 +0300, Don <nospam nospam.com> wrote:

Consider this code:
double add(double x, double y) {
return x + y;
}
Is this code pure? Is it nothrow?
[...]

Does it mean that *any* pure function that has floating-point
arithmetic involved will have to carry an additional status parameter?
If so, why make it explicit?

No. The status parameter is a register on the FPU. You don't have any
choice about it, it's always there. It costs nothing. The proposal is
about specifying a very limited set of circumstances where it is not
allowed to be corrupted. At present, it needs to be preserved
_everywhere_, and that's a big problem for pure functions.

I've been programming in C++ for years now and have never ever used
floating point exception or customized rounding modes. It may be useful
in some cases, but I believe it is not a feature of frequent use.

I agree, that's the whole point!

That's why I believe the following would be suitable for most programmers:

- whenever you enter a pure function, all floating point settings get
saved to stack and reset to defaults (rounding to nearest, no exceptions
etc).

- floating point settings get restored upon leaving the pure function
(normally or via exception)

That's MUCH more complicated than this proposal. And it's very slow.
Also, it doesn't work. It would mean that functions like exp() cannot be
pure -- it's mostly those little ones where the rounding mode actually
matters.

- user may change rounding modes explicitly inside a pure function, but
changes won't be visible to outer code (see previous point)

It seems my proposal wasn't clear enough. In practice, there is NO
CHANGE compared to the way things are now. The ONLY difference is that
this proposal provides the guarantees we need to allow things like
sin(x) to be pure nothrow.
And it has the side-effect that it allows compiler writers a bit more
freedom.
```
Mar 13 2009
"Joel C. Salomon" <joelcsalomon gmail.com> writes:
```Don wrote:
Consider this code:
double add(double x, double y) {
return x + y;
}
Is this code pure? (…)
Technically, it isn't. If you change the floating-point rounding mode on
the processor, you get a different result for exactly the same inputs.

IEEE 754-2008, §4 (Attributes and Rounding), prescribes:
An attribute is logically associated with a program block to
modify its numerical and exception semantics. A user can specify
a constant value for an attribute parameter.

Some attributes have the effect of an implicit parameter to most
individual operations of this standard; language standards shall
specify
— rounding-direction attributes (see 4.3)
and should specify
— alternate exception handling attributes (see 8).

If D offered some way to statically set these attributes on a block to
anything but the default, this could make the code pure. (Unless you
explicitly set the mode to “dynamic”, as normally is the case on x86.)

More likely, though, you’d want to template code like this on the
currently-selected rounding mode.

So, can D become the first language to properly implement this aspect of
754-2008?

—Joel Salomon
```
Mar 13 2009
Don <nospam nospam.com> writes:
```Joel C. Salomon wrote:
Don wrote:
Consider this code:
double add(double x, double y) {
return x + y;
}
Is this code pure? (…)
Technically, it isn't. If you change the floating-point rounding mode on
the processor, you get a different result for exactly the same inputs.

IEEE 754-2008, §4 (Attributes and Rounding), prescribes:
An attribute is logically associated with a program block to
modify its numerical and exception semantics. A user can specify
a constant value for an attribute parameter.

Some attributes have the effect of an implicit parameter to most
individual operations of this standard; language standards shall
specify
— rounding-direction attributes (see 4.3)
and should specify
— alternate exception handling attributes (see 8).

If D offered some way to statically set these attributes on a block to
anything but the default, this could make the code pure. (Unless you
explicitly set the mode to “dynamic”, as normally is the case on x86.)

But there are only two modes that are important in practice: 'default' and
'dynamic'. If you have dynamic, any other mode can be done trivially
with a library solution. (A compiler vendor could choose to recognize
the function call as an intrinsic, and perform optimisation in a few cases.)

More likely, though, you’d want to template code like this on the
currently-selected rounding mode.

So, can D become the first language to properly implement this aspect of
754-2008?

—Joel Salomon

Unless you have a specific use-case in mind?
```
Mar 13 2009
Walter Bright <newshound1 digitalmars.com> writes:
```While it's a good suggestion, I think there's a fundamental problem with
it. Suppose a function in the floatingpoint module calls foo() in a
non-floatingpoint module which calls std.math.sin(x). std.math.sin(x) is
marked as "pure" in a non-floatingpoint module. So, inside foo(), it is
assuming that sin(x) is pure and caches the value, while its caller is
manipulating the rounding mode and making repeated calls to foo()
expecting different answers.
```
Mar 13 2009
Daniel Keep <daniel.keep.lists gmail.com> writes:
```Walter Bright wrote:
While it's a good suggestion, I think there's a fundamental problem with
it. Suppose a function in the floatingpoint module calls foo() in a
non-floatingpoint module which calls std.math.sin(x). std.math.sin(x) is
marked as "pure" in a non-floatingpoint module. So, inside foo(), it is
assuming that sin(x) is pure and caches the value, while its caller is
manipulating the rounding mode and making repeated calls to foo()
expecting different answers.

I thought the behaviour was that if you call a function in a
non-floatingpoint module, then that means the rounding mode is fixed at
the "default" and you can't externally change that.

-- Daniel
```
Mar 13 2009
Jason House <jason.james.house gmail.com> writes:
```Walter Bright Wrote:

While it's a good suggestion, I think there's a fundamental problem with
it. Suppose a function in the floatingpoint module calls foo() in a
non-floatingpoint module which calls std.math.sin(x). std.math.sin(x) is
marked as "pure" in a non-floatingpoint module. So, inside foo(), it is
assuming that sin(x) is pure and caches the value, while its caller is
manipulating the rounding mode and making repeated calls to foo()
expecting different answers.

Maybe I misunderstood, but I thought anything marked module(system) could not
call anything that wasn't. I assumed this proposal would mean that
module(floatingpoint) could not call code that wasn't marked the same way.
```
Mar 13 2009
Walter Bright <newshound1 digitalmars.com> writes:
```Jason House wrote:
Walter Bright Wrote:

While it's a good suggestion, I think there's a fundamental problem
with it. Suppose a function in the floatingpoint module calls foo()
in a non-floatingpoint module which calls std.math.sin(x).
std.math.sin(x) is marked as "pure" in a non-floatingpoint module.
So, inside foo(), it is assuming that sin(x) is pure and caches the
value, while its caller is manipulating the rounding mode and
making repeated calls to foo() expecting different answers.

Maybe I misunderstood, but I thought anything marked module(system)
could not call anything that wasn't.

No, all system means is no checking is done for safe mode, even if the
compiler switch says to.

I assumed this proposal would
mean that module(floatingpoint) could not call code that wasn't
marked the same way.

If so, it couldn't call any library functions.
```
Mar 13 2009
Don <nospam nospam.com> writes:
```Walter Bright wrote:
While it's a good suggestion, I think there's a fundamental problem with
it. Suppose a function in the floatingpoint module calls foo() in a
non-floatingpoint module which calls std.math.sin(x). std.math.sin(x) is
marked as "pure" in a non-floatingpoint module. So, inside foo(), it is
assuming that sin(x) is pure and caches the value, while its caller is
manipulating the rounding mode and making repeated calls to foo()
expecting different answers.

That's true, but if you're in a floatingpoint module, and you call a
non-floatingpoint module, it's your responsibility to make sure that the
rounding mode is back to normal. You're saying that you don't care about
the status flags. So it's your own fault if you get surprising results.

The primary use for adjusting the rounding mode is for things like
implementing interval arithmetic. Thus, it's only ever used for small
functions.
```
Mar 13 2009
Walter Bright <newshound1 digitalmars.com> writes:
```Don wrote:
That's true, but if you're in a floatingpoint module, and you call a
non-floatingpoint module, it's your responsibility to make sure that the
rounding mode is back to normal. You're saying that you don't care about
the status flags. So it's your own fault if you get surprising results.

The primary use for adjusting the rounding mode is for things like
implementing interval arithmetic. Thus, it's only ever used for small
functions.

Perhaps we can go with something simpler. If you call a pure function,
then the modes must be set to their defaults.
```
Mar 13 2009
Don <nospam nospam.com> writes:
```Walter Bright wrote:
Don wrote:
That's true, but if you're in a floatingpoint module, and you call a
non-floatingpoint module, it's your responsibility to make sure that
the rounding mode is back to normal. You're saying that you don't care
about the status flags. So it's your own fault if you get surprising
results.

The primary use for adjusting the rounding mode is for things like
implementing interval arithmetic. Thus, it's only ever used for small
functions.

Perhaps we can go with something simpler. If you call a pure function,
then the modes must be set to their defaults.

But then nothing in std.math can be pure.
```
Mar 14 2009
Don <nospam nospam.com> writes:
```Don wrote:
Walter Bright wrote:
[...]

Perhaps we can go with something simpler. If you call a pure function,
then the modes must be set to their defaults.

But then nothing in std.math can be pure.

My proposal is pretty simple -- I doubt we can come up with anything
simpler that's useful.

To clarify the effect of my proposal:
normal function calls floatingpoint function -- rounding mode respected,
sticky status flags can be ignored.
floatingpoint function calls floatingpoint function -- rounding mode
respected, sticky flags set correctly.
floatingpoint function calls normal function -- rounding mode may be
respected, or you may get default rounding instead (implementation
defined). Sticky flags may not be set in all cases, but none will be
cleared.

So, a floatingpoint function should not make any calls to normal
functions under circumstances in which it needs guaranteed rounding, or
where it relies on the sticky flags. I think that's a manageable limitation.

The only other alternative I can see is to require that EVERY function
save the status flags and check the control register before caching any
pure function. Which seems a lot of complexity for the sake of a few
really obscure cases.
```
Mar 14 2009
Walter Bright <newshound1 digitalmars.com> writes:
```Don wrote:
[...]

My proposal is pretty simple -- I doubt we can come up with anything
simpler that's useful.

To clarify the effect of my proposal:
normal function calls floatingpoint function -- rounding mode respected,
sticky status flags can be ignored.
floatingpoint function calls floatingpoint function -- rounding mode
respected, sticky flags set correctly.
floatingpoint function calls normal function -- rounding mode may be
respected, or you may get default rounding instead (implementation
defined). Sticky flags may not be set in all cases, but none will be
cleared.

So, a floatingpoint function should not make any calls to normal
functions under circumstances in which it needs guaranteed rounding, or
where it relies on the sticky flags. I think that's a manageable limitation.

The only other alternative I can see is to require that EVERY function
save the status flags and check the control register before caching any
pure function. Which seems a lot of complexity for the sake of a few
really obscure cases.

I'm still not seeing the difference between this and saying that for
pure functions, the default modes are used. All the std.math functions
will be pure. How is this different from floatingpoint functions calling
normal functions?
```
Mar 14 2009
Don <nospam nospam.com> writes:
```Walter Bright wrote:
Don wrote:
[...]

My proposal is pretty simple -- I doubt we can come up with anything
simpler that's useful.

To clarify the effect of my proposal:
normal function calls floatingpoint function -- rounding mode
respected, sticky status flags can be ignored.
floatingpoint function calls floatingpoint function -- rounding mode
respected, sticky flags set correctly.
floatingpoint function calls normal function -- rounding mode may be
respected, or you may get default rounding instead (implementation
defined). Sticky flags may not be set in all cases, but none will be
cleared.

So, a floatingpoint function should not make any calls to normal
functions under circumstances in which it needs guaranteed rounding,
or where it relies on the sticky flags. I think that's a manageable
limitation.

The only other alternative I can see is to require that EVERY function
save the status flags and check the control register before caching
any pure function. Which seems a lot of complexity for the sake of a
few really obscure cases.

I'm still not seeing the difference between this and saying that for
pure functions, the default modes are used. All the std.math functions
will be pure. How is this different from floatingpoint functions calling
normal functions?

The math functions need to work for any rounding mode, not just the
default mode. They also set the status flags correctly. In fact, they
are almost the only functions where this matters!
```
Mar 14 2009
Walter Bright <newshound1 digitalmars.com> writes:
```Don wrote:
The math functions need to work for any rounding mode, not just the
default mode. They also set the status flags correctly. In fact, they
are almost the only functions where this matters!

Ok, then std.math functions cannot be pure in either your or my
proposal, so I'm not seeing the advantage of yours.
```
Mar 14 2009
Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
```Walter Bright wrote:
Don wrote:
The math functions need to work for any rounding mode, not just the
default mode. They also set the status flags correctly. In fact, they
are almost the only functions where this matters!

Ok, then std.math functions cannot be pure in either your or my
proposal, so I'm not seeing the advantage of yours.

Ok, I need to clear some ignorance on my part:

1. Is it usual to change the FP flags in an application multiple times,
or not? (I never changed them, so please be easy on me.)

2. Is one FP flag setup much more often used than others? If so, we
could define pure functions assuming the often-used setup and then some
impure functions using any other setup.

Andrei
```
Mar 14 2009
Walter Bright <newshound1 digitalmars.com> writes:
```Andrei Alexandrescu wrote:
1. Is it usual to change the FP flags in an application multiple times,
or not? (I never changed them, so please be easy on me.)

Given a complex calculation, one might want to know how sensitive the
result is to roundoff error. Calculating this exactly can be a daunting
task, but you can get a rough-and-ready estimate of it by running the
calculation 3 times:

1. with round to nearest (the default)
2. with round up
3. with round down

and comparing the results. I don't really know of any other uses.

2. Is one FP flag setup much more often used than others? If so, we
could define pure functions assuming the often-used setup and then some
impure functions using any other setup.

I'm not ready to face the prospect of two versions of each math function :-(
```
Mar 14 2009
"Joel C. Salomon" <joelcsalomon gmail.com> writes:
```Walter Bright wrote:
Andrei Alexandrescu wrote:
1. Is it usual to change the FP flags in an application multiple
times, or not? (I never changed them, so please be easy on me.)

Given a complex calculation, one might want to know how sensitive the
result is to roundoff error. Calculating this exactly can be a daunting
task, but you can get a rough-and-ready estimate of it by running the
calculation 3 times:

1. with round to nearest (the default)
2. with round up
3. with round down

and comparing the results. I don't really know of any other uses.

Implementing interval arithmetic:

struct interval
{
real low;
real high;

interval opAdd(interval rhs)
{
return {this.low +!(roundDown) rhs.low,
this.high +!(roundUp) rhs.high};
}
…
}

On the 754r mailing list, the HPC crowd was *very* insistent that static
modes be explicitly in the standard. On some (hypothetical?)
architectures, the different rounding modes might translate to different
opcodes to the FPU rather than adding a “mode-change” instruction.

—Joel Salomon
```
Mar 14 2009
Don <nospam nospam.com> writes:
```Walter Bright wrote:
Don wrote:
The math functions need to work for any rounding mode, not just the
default mode. They also set the status flags correctly. In fact, they
are almost the only functions where this matters!

Ok, then std.math functions cannot be pure in either your or my
proposal, so I'm not seeing the advantage of yours.

They _can_ be pure in my proposal.

Take  real y = exp(real x) as an example.

Actually what happens is:
y = exp(real x, threadlocal int controlflags);
threadlocal int statusflags |= exp_effect_statusflags(real x, threadlocal int controlflags);
To make exp() pure, we need to get rid of those two thread local variables.
Take the basic version of my proposal: 'pure' functions in floatingpoint
modules cannot be cached at all.
Nonetheless, they can be called by pure functions in normal modules,
and those functions _can_ be cached.
This works because in a normal function, the control flags are defined
to be in the default state. The rule 'caching only possible when called
from a normal module' thus means that caching only happens when the
control flags are in the default state. The global variable has been
turned into a compile-time constant.

The status flags are stated to be in an undefined state inside
non-floatingpoint modules, so the fact that they keep changing is irrelevant. The
caching system can therefore ignore the fact that the status and control
registers exist.

It's only when a function in a floatingpoint module calls a pure
function which is also in a floatingpoint module, that we have a
guarantee that the caching system will not interfere. This very simple
rule is all we need.
```
Mar 14 2009
Don <nospam nospam.com> writes:
```Don wrote:
Walter Bright wrote:
Don wrote:
The math functions need to work for any rounding mode, not just the
default mode. They also set the status flags correctly. In fact, they
are almost the only functions where this matters!

Ok, then std.math functions cannot be pure in either your or my
proposal, so I'm not seeing the advantage of yours.

They _can_ be pure in my proposal.

Take  real y = exp(real x) as an example.

Actually what happens is:
y = exp(real x, threadlocal int controlflags);
threadlocal int statusflags |= exp_effect_statusflags(real x, threadlocal int controlflags);

Or to put it another way -- in my proposal, we acknowledge that these
two hidden variables always exist, in every function, and must be
treated correctly for all pure functions.

My proposal tames those hidden variables: it allows the compiler to completely ignore the hidden
variables in every module except for a small number of designated
modules. These designated modules are so rare, that they can be dealt
with by disabling caching of pure function return values.

I would expect that apart from std.math, there'd be very few other
advancedfloatingpoint modules, other than those explicitly dealing with
interval arithmetic, and those dealing with particular array operations.

Easy to implement, easy to explain to users: if you want to use the
advanced floating point features, declare your module as
module(advancedfloatingpoint). Any public function in such a module must
restore the rounding mode back to the default before returning, and
before calling any function which is not defined in an
advancedfloatingpoint module. Any call to a function which is not in an
advancedfloatingpoint module leaves the floating point sticky flags in
an undefined state.

Frankly, I think this is one of my best ever language proposals. I'd
considered much more complicated approaches and discovered that they
didn't work. I was very proud to have found such a simple solution.

The difficult bit is convincing people that it actually solves the
problem, and that almost nothing else works. <g>
```
Mar 15 2009
Walter Bright <newshound1 digitalmars.com> writes:
```Let's say we have A which is in a floatingpoint module, B which is in a
non-floatingpoint module and C which is marked pure in a
non-floatingpoint module:

-------------------------
module A(floatingpoint);
void a()
{
set mode;
b();
restore mode;
}
------------------------
module B;
void b()
{
c();
}
-------------------------
module C;
pure real c()
{
...
}
------------------------

Where is the mode for c() getting set back to the default?
```
Mar 15 2009
Don <nospam nospam.com> writes:
```Walter Bright wrote:
Let's say we have A which is in a floatingpoint module, B which is in a
non-floatingpoint module and C which is marked pure in a
non-floatingpoint module:

-------------------------
module A(floatingpoint);
void a()
{
set mode;
b();
restore mode;
}
------------------------
module B;
void b()
{
c();
}
-------------------------
module C;
pure real c()
{
...
}
------------------------

Where is the mode for c() getting set back to the default?

A has called a function in B. B is not a floatingpoint module, so b()
can only be called when the mode is set back to the default. a()
violates this contract, so a() is incorrect. There's nothing wrong with
b() or c(). If a() wants to call b(), it needs to restore the mode
first; or else change b() into another floatingpoint module.

Something interesting about my proposal is that although it is motivated
by the purity problem, that's simply a rule for the compiler -- the
rules for programmers do not involve purity at all.(See my other post).
Do not call _any_ functions in non-floatingpoint modules (pure or not)
without restoring the rounding modes back to the default.
```
Mar 15 2009
Walter Bright <newshound1 digitalmars.com> writes:
```Don wrote:
A has called a function in B. B is not a floatingpoint module, so b()
can only be called when the mode is set back to the default. a()
violates this contract, so a() is incorrect. There's nothing wrong with
b() or c(). If a() wants to call b(), it needs to restore the mode
first; or else change b() into another floatingpoint module.

Ok, this was the missing piece in my understanding of the proposal.

But this requires that std.math either be floatingpoint, or two versions
of it must exist if you want to change the rounding modes on it.

Something interesting about my proposal is that although it is motivated
by the purity problem, that's simply a rule for the compiler -- the
rules for programmers do not involve purity at all.(See my other post).
Do not call _any_ functions in non-floatingpoint modules (pure or not)
without restoring the rounding modes back to the default.

They could be done in terms of pure - if you call any pure function, the
modes must be set to the default.
```
Mar 15 2009
Sergey Gromov <snake.scaly gmail.com> writes:
```Sun, 15 Mar 2009 13:50:07 -0700, Walter Bright wrote:

Don wrote:
Something interesting about my proposal is that although it is motivated
by the purity problem, that's simply a rule for the compiler -- the
rules for programmers do not involve purity at all.(See my other post).
Do not call _any_ functions in non-floatingpoint modules (pure or not)
without restoring the rounding modes back to the default.

They could be done in terms of pure - if you call any pure function, the
modes must be set to the default.

In Don's proposal, the following is legal:

-------------------------
module A(floatingpoint);
pure void a()
{
set mode;
b();
restore mode;
}
------------------------
module B(floatingpoint);
pure void b()
{
do stuff;
}
-------------------------

because, from compiler's perspective, they're

struct FpuState { mode; sticky; }
pure FpuState a(FpuState s);
pure FpuState b(FpuState s);

and can be actually cached, if the compiler so wishes.  IIUC, this is
exactly the use case when you implement range arithmetic.
```
Mar 15 2009
Don <nospam nospam.com> writes:
```Sergey Gromov wrote:
Sun, 15 Mar 2009 13:50:07 -0700, Walter Bright wrote:

Don wrote:
Something interesting about my proposal is that although it is motivated
by the purity problem, that's simply a rule for the compiler -- the
rules for programmers do not involve purity at all.(See my other post).
Do not call _any_ functions in non-floatingpoint modules (pure or not)
without restoring the rounding modes back to the default.

They could be done in terms of pure - if you call any pure function, the
modes must be set to the default.

In Don's proposal, the following is legal:

-------------------------
module A(floatingpoint);
pure void a()
{
set mode;
b();
restore mode;
}
------------------------
module B(floatingpoint);
pure void b()
{
do stuff;
}
-------------------------

because, from compiler's perspective, they're

struct FpuState { mode; sticky; }
pure FpuState a(FpuState s);
pure FpuState b(FpuState s);

and can be actually cached, if the compiler so wishes.  IIUC, this is
exactly the use case when you implement range arithmetic.

Hooray! Someone's understood the proposal.

BTW, I think that probably:
module(lowlevelfloatingpoint) would be better than module(floatingpoint).
```
Mar 16 2009
Michel Fortin <michel.fortin michelf.com> writes:
```On 2009-03-16 04:03:01 -0400, Don <nospam nospam.com> said:

In Don's proposal, the following is legal:

-------------------------
module A(floatingpoint);
pure void a()
{
set mode;
b();
restore mode;
}
------------------------
module B(floatingpoint);
pure void b()
{
do stuff;
}
-------------------------

because, from compiler's perspective, they're

struct FpuState { mode; sticky; }
pure FpuState a(FpuState s);
pure FpuState b(FpuState s);

and can be actually cached, if the compiler so wishes.  IIUC, this is
exactly the use case when you implement range arithmetic.

Hooray! Someone's understood the proposal.

Interestingly, it's almost the same thing as I proposed earlier in this
thread. In my proposal what you can declare floating-point-flag-neutral
are functions instead of modules, and you wouldn't need a statement to
set and restore the mode as it'd be done automatically when calling a
function declared for a given mode. Said mode could be explicit or
neutral, the latter meaning the function accepts any mode.

My thinking is that forcing flag changes on function boundaries should
make it easier for the compiler than set/restore statements, while
ensuring they're properly scoped.

Here's what it could look like:

pure float a()  // floatmode(round_nearest)  is assumed when omitted
{
// compiler sets the float mode to round down according to b's declaration.
b(); // now can call b with the right mode
// compiler restores float mode to round_nearest (this function's mode)
// calls to b can be easily memoized since b always uses the same float mode
}

pure float b() floatmode(round_down)
{
return c(1); // call c with the current settings (because c is float-mode-neutral)
// calls to c can be memoized within the boundaries of b because the round
// mode won't change inside b.
}

pure float c(float) floatmode(neutral)
{
// do stuff in the caller's floating point mode
}

And to set all functions in a module as being float-mode-neutral, do it
like you'd do for extern(C), or pure:

module std.math;

floatmode(neutral):

// write your functions here.

--
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
```
Mar 16 2009
Don <nospam nospam.com> writes:
```Michel Fortin wrote:
On 2009-03-16 04:03:01 -0400, Don <nospam nospam.com> said:

In Don's proposal, the following is legal:

-------------------------
module A(floatingpoint);
pure void a()
{
set mode;
b();
restore mode;
}
------------------------
module B(floatingpoint);
pure void b()
{
do stuff;
}
-------------------------

because, from compiler's perspective, they're

struct FpuState { mode; sticky; }
pure FpuState a(FpuState s);
pure FpuState b(FpuState s);

and can be actually cached, if the compiler so wishes.  IIUC, this is
exactly the use case when you implement range arithmetic.

Hooray! Someone's understood the proposal.

Interestingly, it's almost the same thing as I proposed earlier in this
thread. In my proposal what you can declare floating-point-flag-neutral
are functions instead of modules, and you wouldn't need a statement to
set and restore the mode as it'd be done automatically when calling a
function declared for a given mode. Said mode could be explicit or
neutral, the latter meaning the function accepts any mode.

My thinking is that forcing flag changes on function boundaries should
make it easier for the compiler than set/restore statements, while
ensuring they're properly scoped.

Here's what it could look like:

pure float a()  // floatmode(round_nearest) is assumed when omitted
{
// compiler sets the float mode to round down according to b's declaration.
b(); // now can call b with the right mode
// compiler restores float mode to round_nearest (this function's mode)
// calls to b can be easily memoized since b always uses the same float mode
}

pure float b() floatmode(round_down)
{
return c(1); // call c with the current settings (because c is float-mode-neutral)
// calls to c can be memoized within the boundaries of b because the round
// mode won't change inside b.
}

pure float c(float) floatmode(neutral)
{
// do stuff in the caller's floating point mode
}

And to set all functions in a module as being float-mode-neutral, do it
like you'd do for extern(C), or pure:

module std.math;

floatmode(neutral):

// write your functions here.

That requires a new keyword, four new calling conventions, a new name
mangling scheme, compiler insertion of special code, nasty issues with
function pointers, ...
for a feature that almost nobody will ever use. And it doesn't deal with
dynamic rounding mode. And it doesn't solve the problem of the sticky
flags.

It's not the same as my proposal, at all.
```
Mar 16 2009
Michel Fortin <michel.fortin michelf.com> writes:
```On 2009-03-16 08:27:28 -0400, Don <nospam nospam.com> said:

That requires a new keyword, four new calling conventions, a new name
mangling scheme, compiler insertion of special code, nasty issues with
function pointers, ...

Which isn't much different from adding a new extern(x) option, for which
all these problems have been solved.

for a feature that almost nobody will ever use. And it doesn't deal
with    dynamic rounding mode.

Well, isn't it dynamic rounding mode? I ask because you can change the
mode dynamically by calling a function. With floatmode(neutral) you
tell which functions support any rounding mode, and with floatmode(x)
you choose which rounding mode to use within a function. It just makes
sure those changes are scoped and limited to function boundaries.

If you want to evaluate the same function with two rounding modes, just
create a template:

R roundUp(alias a, R)(float arg) floatmode(round_up)
{
return a(arg);
}
R roundDown(alias a, R)(float arg) floatmode(round_down)
{
return a(arg);
}

then call it:

roundUp!(sin)(8);
roundDown!(sin)(8);

And it doesn't solve the problem of the sticky flags.

As for sticky flags, couldn't they be returned by the template when you need them:

struct FloatAndStickyFlags {
this(float, int);
float value;
int sticky_flags;
}
FloatAndStickyFlags roundDownGetStickyFlags(alias a, R)(float arg)
floatmode(round_down)
{
return FloatAndStickyFlags(a(arg), getStickyFlags());
}

--
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
```
Mar 16 2009
Don <nospam nospam.com> writes:
```Michel Fortin wrote:
On 2009-03-16 08:27:28 -0400, Don <nospam nospam.com> said:

That requires a new keyword, four new calling conventions, a new name
mangling scheme, compiler insertion of special code, nasty issues with
function pointers, ...

Which isn't much different from adding a new extern(x) option, for which
all these problems have been solved.

Adding an extern(x) is a colossal language change!

for a feature that almost nobody will ever use. And it doesn't deal
with    dynamic rounding mode.

Well, isn't it dynamic rounding mode? I ask because you can change the
mode dynamically by calling a function. With floatmode(neutral) you tell
which functions support any rounding mode, and with floatmode(x) you
choose which rounding mode to use within a function. It just makes sure
those changes are scoped and limited to function boundaries.

You have to be able to SET the rounding mode. In the dynamic case, you
can't get the compiler to do it automatically for you, without saying
which one you want!

If you want to evaluate the same function with two rounding modes, just
create a template:

R roundUp(alias a, R)(float arg) floatmode(round_up)
{
return a(arg);
}
R roundDown(alias a, R)(float arg) floatmode(round_down)
{
return a(arg);
}

then call it:

roundUp!(sin)(8);
roundDown!(sin)(8);

And it doesn't solve the problem of the sticky flags.

As for sticky flags, couldn't they be returned by the template when you need them:

struct FloatAndStickyFlags {
this(float, int);
float value;
int sticky_flags;
}
FloatAndStickyFlags roundDownGetStickyFlags(alias a, R)(float arg)
floatmode(round_down)
{
return FloatAndStickyFlags(a(arg), getStickyFlags());
}

No, you can't do that without a guarantee that the sticky flags are
preserved. The sticky flags depend on the state of the sticky flags on
entry to the function, and on the behaviour of the function. If you only
cache the return value of the function, the sticky flags will be wrong.
AND even if you did solve that, you'd still need to duplicate every
function in your code, according to whether it returns the sticky flags
or not!

I've already posted two elegant, very simple solutions which solve all
the problems.

SOLUTION 1: use the module statement to mark specific modules as
containing functions which need to use the rounding mode and/or sticky
flags, preventing pure functions in those modules from being cached when
called from other similarly marked modules;

SOLUTION 2: provide a call into the runtime to turn caching of pure
functions on and off.

There's absolutely no need for complicated stuff.
```
Mar 16 2009
Don <nospam nospam.com> writes:
```Walter Bright wrote:
Don wrote:
A has called a function in B. B is not a floatingpoint module, so b()
can only be called when the mode is set back to the default. a()
violates this contract, so a() is incorrect. There's nothing wrong
with b() or c(). If a() wants to call b(), it needs to restore the
mode first; or else change b() into another floatingpoint module.

Ok, this was the missing piece in my understanding of the proposal.

But this requires that std.math either be floatingpoint, or two versions
of it must exist if you want to do change the rounding modes on it.

I'm proposing that std.math would be floatingpoint. The docs contain
references to the sticky flags; I just went to a lot of trouble to make
sure that exp() sets the sticky flags correctly.

Something interesting about my proposal is that although it is
motivated by the purity problem, that's simply a rule for the compiler
-- the rules for programmers do not involve purity at all.(See my
other post). Do not call _any_ functions in non-floatingpoint modules
(pure or not) without restoring the rounding modes back to the default.

They could be done in terms of pure - if you call any pure function, the
modes must be set to the default.

(1) If it's totally forbidden to call a pure function with non-default
rounding modes, you need a separate non-pure function for the
non-default case. And unfortunately, it's viral -- you'd need
non-default rounding mode functions for every function. Even though
these functions are the same for pure and non-pure.

(2) You could do it as, 'pure' is cacheable only if the control mode is
set to default, otherwise it's not allowed to be cached. That would
require the compiler to check the control mode all the time if it's
implementing caching. And it has to check it on EVERY function, it can't
rely on the signature. Something like:

pure int foo(int x)
{
return (x*0.5 > 6.0)? 1 : 2;
}

depends on the rounding mode! Consider that this might be in a library
-- I just don't think it's viable.

or else have to deal with them explicitly.

(3) There is another option which would actually work. That is to
introduce a secret threadlocal 'must_not_cache_pure_functions' bool
variable.

At any point where the mode is about to change,
must_not_cache_pure_functions must be set to true, and set to false when
the mode is restored.
Likewise, that variable should be set to true whenever you're beginning
a scope where you care about the sticky flags.

_Every_ attempt to use the cached result of a pure function would have
to check that bool before doing anything else.
```
Mar 16 2009
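Don's option (3) can be sketched as runtime support code. This is a hedged C approximation with a one-entry cache; every name here is hypothetical -- no such runtime hook exists:

```c
#include <stdbool.h>

/* The secret thread-local flag: set while the FP mode is non-default
   or while the caller cares about the sticky flags. */
static _Thread_local bool must_not_cache_pure_functions = false;

/* One-entry memo cache for a single "pure" function. */
static _Thread_local bool   have_cached = false;
static _Thread_local double cached_arg, cached_result;

double slow_square(double x) { return x * x; }   /* stands in for exp() etc. */

double pure_square(double x)
{
    if (!must_not_cache_pure_functions && have_cached && x == cached_arg)
        return cached_result;               /* cache hit */
    double r = slow_square(x);
    if (!must_not_cache_pure_functions) {   /* only cache in the default state */
        have_cached = true;
        cached_arg = x;
        cached_result = r;
    }
    return r;
}

/* To be called around any region that changes the mode or inspects
   the sticky flags. */
void enter_fp_region(void) { must_not_cache_pure_functions = true; }
void leave_fp_region(void) { must_not_cache_pure_functions = false; }
```

The cost Don points out is visible here: every use of the cache checks the flag first.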
Walter Bright <newshound1 digitalmars.com> writes:
```Don wrote:
Walter Bright wrote:
Don wrote:
A has called a function in B. B is not a floatingpoint module, so b()
can only be called when the mode is set back to the default. a()
violates this contract, so a() is incorrect. There's nothing wrong
with b() or c(). If a() wants to call b(), it needs to restore the
mode first; or else change b() into another floatingpoint module.

Ok, this was the missing piece in my understanding of the proposal.

But this requires that std.math either be floatingpoint, or two
versions of it must exist if you want to change the rounding modes
on it.

I'm proposing that std.math would be floatingpoint. The docs contain
references to the sticky flags; I just went to a lot of trouble to make
sure that exp() sets the sticky flags correctly.

If std.math was floatingpoint, then its functions could not be pure.

Something interesting about my proposal is that although it is
motivated by the purity problem, that's simply a rule for the
compiler -- the rules for programmers do not involve purity at
all.(See my other post). Do not call _any_ functions in
non-floatingpoint modules (pure or not) without restoring the
rounding modes back to the default.

They could be done in terms of pure - if you call any pure function,
the modes must be set to the default.

(1) If it's totally forbidden to call a pure function with non-default
rounding modes, you need a separate non-pure function for the
non-default case. And unfortunately, it's viral -- you'd need
non-default rounding mode functions for every function. Even though
these functions are the same for pure and non-pure.

I agree that's a problem.

(2) You could do it as, 'pure' is cacheable only if the control mode is
set to default, otherwise it's not allowed to be cached. That would
require the compiler to check the control mode all the time if it's
implementing caching. And it has to check it on EVERY function, it can't
rely on the signature. Something like:

pure int foo(int x)
{
return (x*0.5 > 6.0)? 1 : 2;
}

depends on the rounding mode! Consider that this might be in a library
-- I just don't think it's viable.

or else have to deal with them explicitly.

I don't see how your proposal fixes this problem.

(3) There is another option which would actually work. That is to
introduce a secret threadlocal 'must_not_cache_pure_functions' bool
variable.

At any point where the mode is about to change,
must_not_cache_pure_functions must be set to true, and set to false when
the mode is restored.
Likewise, that variable should be set to true whenever you're beginning
a scope where you care about the sticky flags.

_Every_ attempt to use the cached result of a pure function would have
to check that bool before doing anything else.

Maybe a solution is to just have a global compiler flag that says "don't
cache pure functions." Because this problem applies to every pure
function, because pure functions can call other pure functions which
call floating point pure functions.
```
Mar 17 2009
Don <nospam nospam.com> writes:
```Walter Bright wrote:
Don wrote:
Walter Bright wrote:
Don wrote:
A has called a function in B. B is not a floatingpoint module, so
b() can only be called when the mode is set back to the default. a()
violates this contract, so a() is incorrect. There's nothing wrong
with b() or c(). If a() wants to call b(), it needs to restore the
mode first; or else change b() into another floatingpoint module.

Ok, this was the missing piece in my understanding of the proposal.

But this requires that std.math either be floatingpoint, or two
versions of it must exist if you want to change the rounding modes
on it.

I'm proposing that std.math would be floatingpoint. The docs contain
references to the sticky flags; I just went to a lot of trouble to
make sure that exp() sets the sticky flags correctly.

If std.math was floatingpoint, then its functions could not be pure.

It can certainly be floatingpoint. That's the whole point of the
proposal! Fundamental to the proposal is to relax the definition of pure
from, "the result depends only on the input parameters, and has no
side-effects" to (in x86 terminology)
"the result depends only on the inputs _and on the floating-point
control register_, and has no side-effects _other than on the
floating-point status register_".

Because without this relaxed definition, either
(1) any function with any floating-point operation cannot be pure; or
(2) access to the control and status registers is forbidden (the Java
solution).

With this changed definition of pure, pure is not the same as "trivially
cacheable". I think this is the key point which you haven't understood.

Every function in std.math fulfills the relaxed requirement for purity,
though they are not "trivially cacheable", and don't satisfy the rigid
purity rule.

Relaxed purity isn't much use for the caching optimization (though it
helps for the other benefits of 'pure'). So, I then introduce
module(floatingpoint) as a trick to make almost all pure functions
"trivially cacheable".

All pure functions are trivially cacheable, unless they are defined in a
floatingpoint module, AND are called from another floatingpoint module.

That tiny relaxation of the purity rules is enough to allow things like
interval arithmetic to be implemented. In every other circumstance, the
rigid purity rule can be applied.

So we get the desired outcomes:
(1) floating point can be used in pure functions
(but only in very limited circumstances);
(2) pure functions can be trivially cached
(except in very limited circumstances).
```
Mar 17 2009
Sergey Gromov <snake.scaly gmail.com> writes:
```Tue, 17 Mar 2009 03:38:23 -0700, Walter Bright wrote:

Don wrote:
Walter Bright wrote:
Don wrote:
A has called a function in B. B is not a floatingpoint module, so b()
can only be called when the mode is set back to the default. a()
violates this contract, so a() is incorrect. There's nothing wrong
with b() or c(). If a() wants to call b(), it needs to restore the
mode first; or else change b() into another floatingpoint module.

Ok, this was the missing piece in my understanding of the proposal.

But this requires that std.math either be floatingpoint, or two
versions of it must exist if you want to change the rounding modes
on it.

I'm proposing that std.math would be floatingpoint. The docs contain
references to the sticky flags; I just went to a lot of trouble to make
sure that exp() sets the sticky flags correctly.

If std.math was floatingpoint, then its functions could not be pure.

Let's see.

int foo(int x)
{
return x * x;
}

returns its result in a thread-local register EAX.  You say this
function *can* be pure.

Now this function:

module(floatingpoint)
double bar(double x)
{
return x * x;
}

This function receives its arguments on a thread-local FPU stack, in a
register.  It returns its result on a thread-local FPU stack, in a
register.  You say this function *can not* be pure.  Why?
```
Mar 17 2009
Michel Fortin <michel.fortin michelf.com> writes:
```On 2009-03-14 02:45:27 -0400, Walter Bright <newshound1 digitalmars.com> said:

Don wrote:
That's true, but if you're in a floatingpoint module, and you call a
non-floatingpoint module, it's your responsibility to make sure that
the rounding mode is back to normal. You're saying that you don't care
about the status flags. So it's your own fault if you get surprising
results.

The primary use for adjusting the rounding mode is for things like
implementing interval arithmetic. Thus, it's only ever used for small
functions.

Perhaps we can go with something simpler. If you call a pure function,
then the modes must be set to their defaults.

Just dumping another idea in the suggestion box. Perhaps for all
functions the compiler could store the floating point flags it expects
when called. All functions would expect the default flags, but there
could be a way to attach different flags to a function. Then, the
compiler makes sure that when calling that function the flags are set
properly.

The net result is that if you keep the default on every function,
you'll get the exact same assembler result as today. If you set the
flags on a function, then the caller will set the flags to that before
calling, but it's only necessary if they're different from the flags in
the current function.

And perhaps there should be a way to specify a function that can accept
any flag configuration. In that case, memoizing that function would
require considering the floating point flags as an extra parameter.

Now, tell me, am I using a missile to kill a fly?

--
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
```
Mar 14 2009
Don <nospam nospam.com> writes:
```Michel Fortin wrote:
On 2009-03-14 02:45:27 -0400, Walter Bright <newshound1 digitalmars.com>
said:

Don wrote:
That's true, but if you're in a floatingpoint module, and you call a
non-floatingpoint module, it's your responsibility to make sure that
the rounding mode is back to normal. You're saying that you don't
care about the status flags. So it's your own fault if you get
surprising results.

The primary use for adjusting the rounding mode is for things like
implementing interval arithmetic. Thus, it's only ever used for small
functions.

Perhaps we can go with something simpler. If you call a pure function,
then the modes must be set to their defaults.

Just dumping another idea in the suggestion box. Perhaps for all
functions the compiler could store the floating point flags it expects
when called. All functions would expect the default flags, but there
could be a way to attach different flags to a function. Then, the
compiler makes sure that when calling that function the flags are set
properly.

The net result is that if you keep the default on every function, you'll
get the exact same assembler result as today. If you set the flags on a
function, then the caller will set the flags to that before calling, but
it's only necessary if they're different from the flags in the current
function.

And perhaps there should be a way to specify a function that can accept
any flag configuration. In that case, memoizing that function would
require considering the floating point flags as an extra parameter.

Now, tell me, am I using a missile to kill a fly?

I think so. My proposal is basically like that, except that it asserts
that (1) the state of the flags can be applied on a per-module basis;
and (2) the only situations of practical importance are "default" and
"other".
It's not just the rounding mode (which is an input), it's also the
sticky flags, which are both an input and an output.
```
Mar 14 2009
Philip Miess <philip.Miess yahoo.com> writes:
```Walter Bright wrote:
Don wrote:
That's true, but if you're in a floatingpoint module, and you call a
non-floatingpoint module, it's your responsibility to make sure that
the rounding mode is back to normal. You're saying that you don't care
about the status flags. So it's your own fault if you get surprising
results.

The primary use for adjusting the rounding mode is for things like
implementing interval arithmetic. Thus, it's only ever used for small
functions.

Perhaps we can go with something simpler. If you call a pure function,
then the modes must be set to their defaults.

Walter,
What about a default rounding mode parameter on pure functions that care
about it, like this:

pure int sqrt(int x, invariant roundingMode round = default)
{
return x*x;
}

No change to D is necessary to use this.
If you wanted to make it a little easier you could
provide a standard rounding mode class that gets the current rounding mode.
Now, if you don't want to use the default rounding mode, you pass the
rounding mode you're using as a parameter.
The function can be cached since its output only depends on its input.

In summary, if your pure function depends on the rounding mode, that mode
should be a parameter of your function.
What do you think, will that work?

Phil
```
Apr 04 2009
Philip Miess <philip.Miess yahoo.com> writes:
```of course my example makes no sense
try

pure float square(float x, invariant roundingMode round = default)
{
return x*x;
}
in case that helps

Phil
```
Apr 04 2009
"Denis Koroskin" <2korden gmail.com> writes:
```On Sat, 04 Apr 2009 15:19:46 +0400, Philip Miess <philip.Miess yahoo.com> wrote:

of course my example makes no sense
try

pure float square(float x, invariant roundingMode round = default)
{
return x*x;
}
in case that helps

Phil

I don't see roundingMode used anywhere in your example.
```
Apr 04 2009
Philip Miess <philip.Miess yahoo.com> writes:
```Denis Koroskin wrote:
On Sat, 04 Apr 2009 15:19:46 +0400, Philip Miess
<philip.Miess yahoo.com> wrote:

of course my example makes no sense
try

pure float square(float x, invariant roundingMode round = default)
{
return x*x;
}
in case that helps

Phil

I don't see roundingMode used anywhere in your example.

Denis,
The rounding mode is set globally, so if you set it before calling a
function, it will be used to round the results of a floating-point
multiply.
Anyway, here is a better example, much like one I have actually
compiled.

import std.c.fenv;
import std.math;

pure long myround(real x, int round = fegetround() )
{
    //fesetround(round);
    return lrint(x);
}

int main(char[][] args)
{
    long result;
    result = myround(2.6);
    //result is now 3

    fesetround(FE_DOWNWARD);
    result = myround(2.6);
    //result is now 2
    return 0;
}

If DMD were memoizing the function, it should not think that both of
these calls return the same thing.

Unless, of course, DMD is too smart (or not smart enough) and optimizes
the second parameter away without realizing that it matters.

In that case, uncomment the call to fesetround with the round parameter
inside myround(). You may also want to check whether the requested
rounding mode is the same as the current one and only set it if it's
not. Then you may like to set it back to the original afterwards to
make the function act like a pure one.

Phil
```
Apr 10 2009
Don <nospam nospam.com> writes:
```Philip Miess wrote:
Walter Bright wrote:
Don wrote:
That's true, but if you're in a floatingpoint module, and you call a
non-floatingpoint module, it's your responsibility to make sure that
the rounding mode is back to normal. You're saying that you don't
care about the status flags. So it's your own fault if you get
surprising results.

The primary use for adjusting the rounding mode is for things like
implementing interval arithmetic. Thus, it's only ever used for small
functions.

Perhaps we can go with something simpler. If you call a pure function,
then the modes must be set to their defaults.

Walter,
What about a default rounding mode parameter on pure functions that care
about it, like this:

pure int sqrt(int x, invariant roundingMode round = default)
{
return x*x;
}

No change to D is necessary to use this.
If you wanted to make it a little easier you could
provide a standard rounding mode class that gets the current rounding mode.
Now, if you don't want to use the default rounding mode, you pass the
rounding mode you're using as a parameter.
The function can be cached since its output only depends on its input.

In summary, if your pure function depends on the rounding mode, that mode
should be a parameter of your function.
What do you think, will that work?

Phil

That's actually a LOT more complicated than my suggestion. Also, it's
not how the rounding modes work.

Aargh. It seems that people don't understand that my solution DOES fix
the problem, and is trivial to implement (< 10 lines of code in the DMD
source, one line change in a couple of standard library modules and
THAT'S ALL). Nobody has come up with any problems with it.

Especially, I don't think Walter understands my proposal yet.

But thanks for replying to the thread, I think it's an important one to
get fixed.
```
Apr 04 2009
Philip Miess <philip.Miess yahoo.com> writes:
```Don wrote:
Philip Miess wrote:
Walter Bright wrote:
Don wrote:
That's true, but if you're in a floatingpoint module, and you call a
non-floatingpoint module, it's your responsibility to make sure that
the rounding mode is back to normal. You're saying that you don't
care about the status flags. So it's your own fault if you get
surprising results.

The primary use for adjusting the rounding mode is for things like
implementing interval arithmetic. Thus, it's only ever used for
small functions.

Perhaps we can go with something simpler. If you call a pure
function, then the modes must be set to their defaults.

Walter,
What about a default rounding mode parameter on pure functions that care
about it, like this:

pure int sqrt(int x, invariant roundingMode round = default)
{
return x*x;
}

No change to D is necessary to use this.
If you wanted to make it a little easier you could
provide a standard rounding mode class that gets the current rounding
mode.
Now, if you don't want to use the default rounding mode, you pass the
rounding mode you're using as a parameter.
The function can be cached since its output only depends on its input.

In summary, if your pure function depends on the rounding mode, that mode
should be a parameter of your function.
What do you think, will that work?

Phil

That's actually a LOT more complicated than my suggestion. Also, it's
not how the rounding modes work.

Aargh. It seems that people don't understand that my solution DOES fix
the problem, and is trivial to implement (< 10 lines of code in the DMD
source, one line change in a couple of standard library modules and
THAT'S ALL). Nobody has come up with any problems with it.

Especially, I don't think Walter understands my proposal yet.

But thanks for replying to the thread, I think it's an important one to
get fixed.

Don,
here is an improved version of my suggestion

import std.c.fenv;
import std.math;

pure long myround(real x, int round = fegetround() )
{
    return lrint(x);
}

int main(char[][] args)
{
    long result;
    result = myround(2.6);
    //result is now 3

    fesetround(FE_DOWNWARD);
    result = myround(2.6);
    //result is now 2
    return 0;
}

I think that this is easier to understand because everything is explicit
and uses the normal syntax.
Only functions that really use the mode need to be parametrized.
It requires no change to the compiler, so it's simpler for Walter.
It does require a little more typing for the function writer but not much.
No change is necessary for the use of the function since it
automatically picks up the current rounding mode.
Additionally it does not affect functions that don't need it like your
module setting would.
That way you don't need to segregate all the functions that use the
rounding mode into a separate module to avoid penalizing the others.

To be clear, I do understand your suggestion and believe it would work.
I just prefer not to add new elements to the language when there is a
workable alternative.

Phil.
```
Apr 10 2009
Don <nospam nospam.com> writes:
```Philip Miess wrote:
Don wrote:
Philip Miess wrote:
Walter Bright wrote:
Don wrote:
That's true, but if you're in a floatingpoint module, and you call
a non-floatingpoint module, it's your responsibility to make sure
that the rounding mode is back to normal. You're saying that you
don't care about the status flags. So it's your own fault if you
get surprising results.

The primary use for adjusting the rounding mode is for things like
implementing interval arithmetic. Thus, it's only ever used for
small functions.

Perhaps we can go with something simpler. If you call a pure
function, then the modes must be set to their defaults.

Walter,
What about a default rounding mode parameter on pure functions that care
about it, like this:

pure int sqrt(int x, invariant roundingMode round = default)
{
return x*x;
}

No change to D is necessary to use this.
If you wanted to make it a little easier you could
provide a standard rounding mode class that gets the current rounding
mode.
Now, if you don't want to use the default rounding mode, you pass the
rounding mode you're using as a parameter.
The function can be cached since its output only depends on its input.

In summary, if your pure function depends on the rounding mode, that mode
should be a parameter of your function.
What do you think, will that work?

Phil

That's actually a LOT more complicated than my suggestion. Also, it's
not how the rounding modes work.

Aargh. It seems that people don't understand that my solution DOES fix
the problem, and is trivial to implement (< 10 lines of code in the
DMD source, one line change in a couple of standard library modules
and THAT'S ALL). Nobody has come up with any problems with it.

Especially, I don't think Walter understands my proposal yet.

But thanks for replying to the thread, I think it's an important one
to get fixed.

Don,
here is an improved version of my suggestion

import std.c.fenv;
import std.math;

pure long myround(real x, int round = fegetround() )
{
    return lrint(x);
}

int main(char[][] args)
{
    long result;
    result = myround(2.6);
    //result is now 3

    fesetround(FE_DOWNWARD);
    result = myround(2.6);
    //result is now 2
    return 0;
}

I think that this is easier to understand because everything is explicit
and uses the normal syntax.
Only functions that really use the mode need to be parametrized.
It requires no change to the compiler, so it's simpler for Walter.

Unfortunately, it _does_ require compiler changes. Here are some issues:
(1) The optimiser needs to discard all those calls to fegetround().
(2) It must not display a warning "parameter 'round' is never used".
(3) Every standard math function gets this extra parameter. So taking
the address of a standard math function will stop working.
(4) The intrinsic functions like sqrt() have signatures which need to
change.
(5) Properties stop working.
(6) This doesn't deal with the problem of the exception sticky flags.
(7) It doesn't deal with the problem of floating point exception handling.

It does require a little more typing for the function writer but not much.
No change is necessary for the use of the function since it
automatically picks up the current rounding mode.
Additionally it does not affect functions that don't need it like your
module setting would.
That way you don't need to segregate all the functions that use the
rounding mode into a separate module to avoid penalizing the others.

But if you look at the functions which need it, you'll find they're
nearly all in the same module anyway (locality of reference). Bear in
mind that there are very few cases where it is ever used. Originally I
had thought of marking every function specifically, but I don't think
that complexity is actually necessary.

To be clear, I do understand your suggestion and believe it would work.
I just prefer not to add new elements to the language when there is a
workable alternative.

I can only think of one such possible solution[1]. My first proposal was
the next best thing: the most trivial possible language change, and
actually giving the compiler additional freedom in exchange.

[1] My second proposal was to provide a runtime call to disable caching
of pure functions, but that requires every pure function to check a
global variable to see if caching of pure functions should be ignored.
Unfortunately, this loses many of the optimisation benefits of 'pure',
and the extra optimisation benefits from my first proposal. So I prefer
my first proposal. But if you insist on no language changes at all, this
is the simplest way to do it.
```
Apr 14 2009
"Joel C. Salomon" <joelcsalomon gmail.com> writes:
```Walter Bright wrote:
While it's a good suggestion, I think there's a fundamental problem with
it. Suppose a function in the floatingpoint module calls foo() in a
non-floatingpoint module which calls std.math.sin(x). std.math.sin(x) is
marked as "pure" in a non-floatingpoint module. So, inside foo(), it is
assuming that sin(x) is pure and caches the value, while its caller is
manipulating the rounding mode and making repeated calls to foo()

So in 754-2008 terms, the mode is *always* set to “dynamic”?

—Joel Salomon
```
Mar 14 2009
Christopher Wright <dhasenan gmail.com> writes:
```Don wrote:
Extend the parametrized module declaration to include something like
module(system, floatingpoint)
as well as
module(system).

Is compiler-determined memoization a confirmed feature for a near-future
release? If not, then it doesn't much matter.
```
Mar 13 2009
bearophile <bearophileHUGS lycos.com> writes:
```Joel C. Salomon:
On the 754r mailing list, the HPC crowd was *very* insistent that static
modes be explicitly in the standard.

Because in technology a lot of things aren't determined on technological
merits, but by politics, money and power. Sometimes behind some of the best
things around us there's the insistent work of a very few people. For example,
we have to say a big THANK YOU to the personal (and political) work of a few
people like Knuth, for the good floating point numbers/operations we have on
all computers today.

Bye,
bearophile
```
Mar 15 2009
Daniel Keep <daniel.keep.lists gmail.com> writes:
```bearophile wrote:
Joel C. Salomon:
On the 754r mailing list, the HPC crowd was *very* insistent that static
modes be explicitly in the standard.

Because in technology a lot of things aren't determined on technological
merits, but by politics, money and power. Sometimes behind some of the best
things around us there's the insistent work of a very few people. For example,
we have to say a big THANK YOU to the personal (and political) work of a few
people like Knuth, for the good floating point numbers/operations we have on
all computers today.

Bye,
bearophile

Interesting article on the history of 754:
http://www.eecs.berkeley.edu/~wkahan/ieee754status/754story.html

-- Daniel
```
Mar 15 2009