## digitalmars.D - Please rid me of this goto

• Andrei Alexandrescu (8/8) Jun 23 2016 So I was looking for an efficient exponentiation implementation, and
• Kagamin (12/12) Jun 23 2016 // Loop invariant: r * (b ^^ e) is the actual result
• H. S. Teoh via Digitalmars-d (19/28) Jun 23 2016 [...]
• Andrei Alexandrescu (4/5) Jun 23 2016 Eh, thank you all who set me straight! I've been in my head for too
• Stefan Koch (4/9) Jun 23 2016 It should be in constfold ...
• H. S. Teoh via Digitalmars-d (9/14) Jun 23 2016 AFAIK, std.math.
• jmh530 (3/11) Jun 23 2016 You're thinking of pow in std.math. I don't see opBinary!("^^")
• H. S. Teoh via Digitalmars-d (9/23) Jun 23 2016 The compiler lowers ^^ to std.math.pow. A decision which has sparked
• ketmar (4/6) Jun 23 2016 the "^^" is very special, 'cause it is the one and only thing
• deadalnix (6/12) Jun 23 2016 It is also has different associativity, and is one of my favorite
• via Digitalmars-d (3/6) Jun 23 2016 binary xor is.... !=
• deadalnix (4/11) Jun 23 2016 3 != 5 is true.
• Patrick Schluter (3/16) Jun 24 2016 He meant logical xor, because binary xor exists (^) and there
• deadalnix (2/19) Jun 24 2016 Still doesn't work.
• Patrick Schluter (8/28) Jun 24 2016 It works if you cast each side of the comparison to boolean
• deadalnix (2/32) Jun 24 2016 So, logical xor isn't != either is it ?
• Patrick Schluter (14/47) Jun 24 2016 It is, I shouldn't have posted the old fashioned (i.e. without
• Seb (9/14) Jun 23 2016 ^^ seems to be lowered here
• Andrei Alexandrescu (5/19) Jun 23 2016 I see, thanks. Both seem to be ripe for the optimized version (they do
• deadalnix (3/10) Jun 23 2016 It is going to depend on the target.
• Seb (36/68) Jun 23 2016 Depends on the target
• deadalnix (1/1) Jun 23 2016 Can you post codegen for each ?
• David Nadlinger (65/67) Jun 23 2016 This is a bad way to benchmark. You are essentially testing the
• David Nadlinger (5/6) Jun 23 2016 (This is of course not intended to be a comprehensive analysis.
• John Colvin (11/78) Jun 23 2016 This is my preferred way of benchmarking as well, people often
• David Nadlinger (5/10) Jun 23 2016 Yep. This is why I was so adamant on using real-world code for my
• Andrei Alexandrescu (5/7) Jun 24 2016 Thanks for this work! I shamelessly copied it into
• Steven Schveighoffer (13/19) Jun 23 2016 Why is this not the same?
• deadalnix (12/21) Jun 23 2016 The inner loop is not a loop, so that's not a problem:
• Timon Gehr (2/10) Jun 23 2016 Unrelated comment: 0^^0 should not overflow.
• Walter Bright (60/61) Jun 23 2016 Paste bin links are ephemeral. The code from the link:
```So I was looking for an efficient exponentiation implementation, and
http://www.stepanovpapers.com/PAM.pdf has a nice discussion starting at
page 54. Stepanov's solution, however, duplicates code, so I eliminated it:

https://dpaste.dzfl.pl/e53acb41885a

The cost is the use of one goto. Can the code be restructured to not
need it?

Thanks,

Andrei
```
Jun 23 2016
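
For readers who just want a self-contained version of the loop the replies below converge on (the dpaste link is ephemeral; Walter re-posts the code from it further down the thread), here is a minimal sketch. The name powChecked is hypothetical, core.checkedint.muls from druntime is assumed as a stand-in for the mul/overflow helper used in the paste, and the 0 ^^ 0 handling is just one possible choice, debated later in the thread.

```
import core.checkedint : muls;
import std.stdio : writeln;

// Overflow-checked integer exponentiation by squaring, in the shape the
// replies below converge on. muls multiplies two longs and sets
// `overflow` if the product wrapped around.
long powChecked(long b, ulong e, ref bool overflow)
{
    long r = 1;
    if (e == 0) return r;   // b ^^ 0 == 1, including 0 ^^ 0 (see the later subthread)
    // Loop invariant: r * (b ^^ e) is the final result.
    for (;;)
    {
        if (e % 2 != 0)
        {
            r = muls(r, b, overflow);
            if (e == 1) return r;
        }
        b = muls(b, b, overflow);
        e /= 2;
    }
}

void main()
{
    bool overflow;
    writeln(powChecked(3, 4, overflow), " overflow=", overflow);  // 81 overflow=false
    writeln(powChecked(2, 64, overflow), " overflow=", overflow); // wraps, overflow=true
}
```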
```     // Loop invariant: r * (b ^^ e) is the actual result
for (;;)
{
    if (e % 2 != 0)
    {
        r = mul(r, b, overflow);
        if (e == 1) return r;
    }
    b = mul(b, b, overflow);
    e /= 2;
}
?
```
Jun 23 2016
```On Thu, Jun 23, 2016 at 01:22:55PM -0400, Andrei Alexandrescu via Digitalmars-d
wrote:
So I was looking for an efficient exponentiation implementation, and
http://www.stepanovpapers.com/PAM.pdf has a nice discussion starting
at page 54. Stepanov's solution, however, duplicates code, so I
eliminated it:

https://dpaste.dzfl.pl/e53acb41885a

The cost is the use of one goto. Can the code be restructured to not
need it?

[...]

I don't understand why that goto is necessary. Or, indeed, why a nested
loop is needed at all. Why can't it be written like this?:

outer: for (;;)
{
    if (e % 2 != 0)
    {
        r = mul(r, b, overflow);
        if (e == 1) return r;
    }

    b = mul(b, b, overflow);
    e /= 2;
}

AFAICT, the two possible code paths through the loops are the same.  Am
I missing something??

T

--
Winners never quit, quitters never win. But those who never quit AND never win
are idiots.
```
Jun 23 2016
```On 06/23/2016 01:34 PM, H. S. Teoh via Digitalmars-d wrote:
I don't understand why that goto is necessary.

Eh, thank you all who set me straight! I've been in my head for too
long. So where is the current implementation of "^^"? If it's not as
fast as this, we should replace it. -- Andrei
```
Jun 23 2016
```On Thursday, 23 June 2016 at 18:05:07 UTC, Andrei Alexandrescu
wrote:
On 06/23/2016 01:34 PM, H. S. Teoh via Digitalmars-d wrote:
I don't understand why that goto is necessary.

Eh, thank you all who set me straight! I've been in my head for
too long. So where is the current implementation of "^^"? If
it's not as fast as this, we should replace it. -- Andrei

It should be in constfold ...

Apparently it's in the optimizer as well :)
```
Jun 23 2016
```On Thu, Jun 23, 2016 at 02:05:07PM -0400, Andrei Alexandrescu via Digitalmars-d
wrote:
On 06/23/2016 01:34 PM, H. S. Teoh via Digitalmars-d wrote:
I don't understand why that goto is necessary.

Eh, thank you all who set me straight! I've been in my head for too
long. So where is the current implementation of "^^"?

AFAIK, std.math.

(Let's not derail this discussion, which is something that can add
immediate value, to another dreaded philosophical discussion of
questionable consequences. :-P)

T

--
MACINTOSH: Most Applications Crash, If Not, The Operating System Hangs
```
Jun 23 2016
```On Thursday, 23 June 2016 at 18:20:00 UTC, H. S. Teoh wrote:
On Thu, Jun 23, 2016 at 02:05:07PM -0400, Andrei Alexandrescu
via Digitalmars-d wrote:
On 06/23/2016 01:34 PM, H. S. Teoh via Digitalmars-d wrote:
I don't understand why that goto is necessary.

Eh, thank you all who set me straight! I've been in my head
for too long. So where is the current implementation of "^^"?

AFAIK, std.math.

You're thinking of pow in std.math. I don't see opBinary!("^^")
anywhere in there. I only see ^^ in comments.
```
Jun 23 2016
```On Thu, Jun 23, 2016 at 06:35:23PM +0000, jmh530 via Digitalmars-d wrote:
On Thursday, 23 June 2016 at 18:20:00 UTC, H. S. Teoh wrote:
On Thu, Jun 23, 2016 at 02:05:07PM -0400, Andrei Alexandrescu via
Digitalmars-d wrote:
On 06/23/2016 01:34 PM, H. S. Teoh via Digitalmars-d wrote:
I don't understand why that goto is necessary.

Eh, thank you all who set me straight! I've been in my head for
too long. So where is the current implementation of "^^"?

AFAIK, std.math.

You're thinking of pow in std.math. I don't see opBinary!("^^")
anywhere in there. I only see ^^ in comments.

The compiler lowers ^^ to std.math.pow.  A decision which has sparked
some debate in the past.

T

--
"No, John.  I want formats that are actually useful, rather than
over-featured megaliths that address all questions by piling on
ridiculous internal links in forms which are hideously over-complex." --
Simon St. Laurent on xml-dev
```
Jun 23 2016
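
As a quick, self-contained illustration of that lowering (nothing here is from the paste; the variable names are just the obvious ones):

```
import std.math : pow;
import std.stdio : writeln;

void main()
{
    long a = 5, b = 25;
    // For runtime operands the ^^ expression is lowered to a call into
    // std.math.pow, so both spellings agree:
    writeln(a ^^ b == pow(a, b));   // true
    // For constant operands the front end folds the value instead:
    static assert(2 ^^ 10 == 1024);
}
```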
```On Thursday, 23 June 2016 at 18:35:23 UTC, jmh530 wrote:
You're thinking of pow in std.math. I don't see opBinary!("^^")
anywhere in there. I only see ^^ in comments.

the "^^" is very special, 'cause it is the one and only thing
that compiler recognizes as "function call" in expression
parsing, and generates call to std.math.pow. a freakin' wart.
```
Jun 23 2016
```On Thursday, 23 June 2016 at 18:49:56 UTC, ketmar wrote:
On Thursday, 23 June 2016 at 18:35:23 UTC, jmh530 wrote:
You're thinking of pow in std.math. I don't see
opBinary!("^^") anywhere in there. I only see ^^ in comments.

the "^^" is very special, 'cause it is the one and only thing
that compiler recognizes as "function call" in expression
parsing, and generates call to std.math.pow. a freakin' wart.

It also has different associativity, and is one of my favorite
gotchas:

| is bitwize or. || is binary or.
& is bitwize and. && is binary and.
^ is bitwize xor. ^^ is... no, never mind.
```
Jun 23 2016
```On Thu, Jun 23, 2016 at 07:11:26PM +0000, deadalnix via Digitalmars-d wrote:
| is bitwize or. || is binary or.
& is bitwize and. && is binary and.
^ is bitwize xor. ^^ is... no, never mind.

binary xor is.... !=

lol
```
Jun 23 2016
```On Thursday, 23 June 2016 at 19:24:54 UTC, via Digitalmars-d
wrote:
On Thu, Jun 23, 2016 at 07:11:26PM +0000, deadalnix via
Digitalmars-d wrote:
| is bitwize or. || is binary or.
& is bitwize and. && is binary and.
^ is bitwize xor. ^^ is... no, never mind.

binary xor is.... !=

lol

3 != 5 is true.
3 binaryxor 5 is false.
```
Jun 23 2016
```On Thursday, 23 June 2016 at 20:01:26 UTC, deadalnix wrote:
On Thursday, 23 June 2016 at 19:24:54 UTC, via Digitalmars-d
wrote:
On Thu, Jun 23, 2016 at 07:11:26PM +0000, deadalnix via
Digitalmars-d wrote:
| is bitwize or. || is binary or.
& is bitwize and. && is binary and.
^ is bitwize xor. ^^ is... no, never mind.

binary xor is.... !=

lol

3 != 5 is true.
3 binaryxor 5 is false.

He meant logical xor, because binary xor exists (^) and there
would be no point to mention an equivalence.
```
Jun 24 2016
```On Friday, 24 June 2016 at 08:40:26 UTC, Patrick Schluter wrote:
On Thursday, 23 June 2016 at 20:01:26 UTC, deadalnix wrote:
On Thursday, 23 June 2016 at 19:24:54 UTC, via Digitalmars-d
wrote:
On Thu, Jun 23, 2016 at 07:11:26PM +0000, deadalnix via
Digitalmars-d wrote:
| is bitwize or. || is binary or.
& is bitwize and. && is binary and.
^ is bitwize xor. ^^ is... no, never mind.

binary xor is.... !=

lol

3 != 5 is true.
3 binaryxor 5 is false.

He meant logical xor, because binary xor exists (^) and there
would be no point to mention an equivalence.

Still doesn't work.
```
Jun 24 2016
```On Friday, 24 June 2016 at 10:11:11 UTC, deadalnix wrote:
On Friday, 24 June 2016 at 08:40:26 UTC, Patrick Schluter wrote:
On Thursday, 23 June 2016 at 20:01:26 UTC, deadalnix wrote:
On Thursday, 23 June 2016 at 19:24:54 UTC, via Digitalmars-d
wrote:
On Thu, Jun 23, 2016 at 07:11:26PM +0000, deadalnix via
Digitalmars-d wrote:
| is bitwize or. || is binary or.
& is bitwize and. && is binary and.
^ is bitwize xor. ^^ is... no, never mind.

binary xor is.... !=

lol

3 != 5 is true.
3 binaryxor 5 is false.

He meant logical xor, because binary xor exists (^) and there
would be no point to mention an equivalence.

Still doesn't work.

It works if you cast each side of the comparison to boolean
values (that's what logical and/or do implicitly; for != it has
to be done explicitly, as the operator has never been seen as
logical xor).
In C (sorry, I'm not fluent in D yet):
(bool)3 != (bool)5    or  old fashioned    !!3 != !!5 or after De
Morgan transformation !3 == !5.
```
Jun 24 2016
```On Friday, 24 June 2016 at 10:33:43 UTC, Patrick Schluter wrote:
On Friday, 24 June 2016 at 10:11:11 UTC, deadalnix wrote:
On Friday, 24 June 2016 at 08:40:26 UTC, Patrick Schluter
wrote:
On Thursday, 23 June 2016 at 20:01:26 UTC, deadalnix wrote:
On Thursday, 23 June 2016 at 19:24:54 UTC, via Digitalmars-d
wrote:
On Thu, Jun 23, 2016 at 07:11:26PM +0000, deadalnix via
Digitalmars-d wrote:
| is bitwize or. || is binary or.
& is bitwize and. && is binary and.
^ is bitwize xor. ^^ is... no, never mind.

binary xor is.... !=

lol

3 != 5 is true.
3 binaryxor 5 is false.

He meant logical xor, because binary xor exists (^) and there
would be no point to mention an equivalence.

Still doesn't work.

It works if you cast each side of the comparison to boolean
values (that's what logical and/or do implicitely, for != it
has to be done explicitely as the operator has never been seen
as logical xor).
in C (sorry I'm not fluent in D yet).
(bool)3 != (bool)5    or  old fashioned    !!3 != !!5 or after
De Morgan transformation !3 == !5.

So, logical xor isn't != either is it ?
```
Jun 24 2016
```On Friday, 24 June 2016 at 20:34:38 UTC, deadalnix wrote:
On Friday, 24 June 2016 at 10:33:43 UTC, Patrick Schluter wrote:
On Friday, 24 June 2016 at 10:11:11 UTC, deadalnix wrote:
On Friday, 24 June 2016 at 08:40:26 UTC, Patrick Schluter
wrote:
On Thursday, 23 June 2016 at 20:01:26 UTC, deadalnix wrote:
On Thursday, 23 June 2016 at 19:24:54 UTC, via
Digitalmars-d wrote:
On Thu, Jun 23, 2016 at 07:11:26PM +0000, deadalnix via
Digitalmars-d wrote:
| is bitwize or. || is binary or.
& is bitwize and. && is binary and.
^ is bitwize xor. ^^ is... no, never mind.

binary xor is.... !=

lol

3 != 5 is true.
3 binaryxor 5 is false.

He meant logical xor, because binary xor exists (^) and
there would be no point to mention an equivalence.

Still doesn't work.

It works if you cast each side of the comparison to boolean
values (that's what logical and/or do implicitely, for != it
has to be done explicitely as the operator has never been seen
as logical xor).
in C (sorry I'm not fluent in D yet).
(bool)3 != (bool)5    or  old fashioned    !!3 != !!5 or after
De Morgan transformation !3 == !5.

So, logical xor isn't != either is it ?

It is, I shouldn't have posted the old fashioned (i.e. without
C99 bool type) version, it has confused you.

Let's see the truth table for logical xor and !=

a  |  b  |  a ^ b | a != b
---|-----|--------|--------
0  |  0  | 0      | 0
0  |  1  | 1      | 1
1  |  0  | 1      | 1
1  |  1  | 0      | 0

omg, it's the same result, but you have first to reduce each side
of the expression to its canonical boolean value 0 or 1 (false or
true in Java or D).
That's also what the logical && and || do, compared to & and |.
```
Jun 24 2016
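
In D the same point fits in a one-line helper (hypothetical snippet, not from the thread): != only acts as logical xor once each operand has been reduced to its truth value.

```
import std.stdio : writeln;

// Logical xor: reduce each operand to bool first, then compare.
bool logicalXor(T)(T a, T b)
{
    return (cast(bool) a) != (cast(bool) b);
}

void main()
{
    writeln(3 != 5);            // true  -- plain != compares the values
    writeln(logicalXor(3, 5));  // false -- both operands are "truthy"
    writeln(logicalXor(0, 5));  // true
    writeln(logicalXor(0, 0));  // false
}
```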
```On Thursday, 23 June 2016 at 18:05:07 UTC, Andrei Alexandrescu
wrote:
On 06/23/2016 01:34 PM, H. S. Teoh via Digitalmars-d wrote:
I don't understand why that goto is necessary.

Eh, thank you all who set me straight! I've been in my head for
too long. So where is the current implementation of "^^"? If
it's not as fast as this, we should replace it. -- Andrei

^^ seems to be lowered here
https://github.com/dlang/dmd/blob/9903aba3b1d39bf499a54edbc81c7d9f08711f3e/src/constfold.d#L506

and std.math.pow is defined here:
https://github.com/dlang/phobos/blob/master/std/math.d#L6028

However you should test how it performs against the LLVM
intrinsics available in LDC, e.g.:

llvm.intrinsics.llvm_pow and llvm_powi (i = integer).
```
Jun 23 2016
```On 06/23/2016 02:37 PM, Seb wrote:
On Thursday, 23 June 2016 at 18:05:07 UTC, Andrei Alexandrescu wrote:
On 06/23/2016 01:34 PM, H. S. Teoh via Digitalmars-d wrote:
I don't understand why that goto is necessary.

Eh, thank you all who set me straight! I've been in my head for too
long. So where is the current implementation of "^^"? If it's not as
fast as this, we should replace it. -- Andrei

^^ seems to be lowered here
https://github.com/dlang/dmd/blob/9903aba3b1d39bf499a54edbc81c7d9f08711f3e/src/constfold.d#L506

and std.math.pow is defined here:
https://github.com/dlang/phobos/blob/master/std/math.d#L6028

I see, thanks. Both seem to be ripe for the optimized version (they do
more testing than necessary).

However you should test how it performs against the LLVM intrinsics
available in LDC, e.g.:

llvm.intrinsincs.llvm_pow and llvm_powi (i = integer).

Cool! Is that a CPU operation?

Andrei
```
Jun 23 2016
```On Thursday, 23 June 2016 at 21:03:08 UTC, Andrei Alexandrescu
wrote:
However you should test how it performs against the LLVM
intrinsics
available in LDC, e.g.:

llvm.intrinsincs.llvm_pow and llvm_powi (i = integer).

Cool! Is that a CPU operation?

Andrei

It is going to depend on the target.
```
Jun 23 2016
```On Thursday, 23 June 2016 at 21:03:08 UTC, Andrei Alexandrescu
wrote:
On 06/23/2016 02:37 PM, Seb wrote:
On Thursday, 23 June 2016 at 18:05:07 UTC, Andrei Alexandrescu
wrote:
On 06/23/2016 01:34 PM, H. S. Teoh via Digitalmars-d wrote:
I don't understand why that goto is necessary.

Eh, thank you all who set me straight! I've been in my head
for too
long. So where is the current implementation of "^^"? If it's
not as
fast as this, we should replace it. -- Andrei

^^ seems to be lowered here
https://github.com/dlang/dmd/blob/9903aba3b1d39bf499a54edbc81c7d9f08711f3e/src/constfold.d#L506

and std.math.pow is defined here:
https://github.com/dlang/phobos/blob/master/std/math.d#L6028

I see, thanks. Both seem to be ripe for the optimized version
(they do more testing than necessary).

However you should test how it performs against the LLVM
intrinsics
available in LDC, e.g.:

llvm.intrinsincs.llvm_pow and llvm_powi (i = integer).

Cool! Is that a CPU operation?

Depends on the target
http://llvm.org/docs/LangRef.html#llvm-pow-intrinsic

You should definitely benchmark between compilers. I gave it a
quick try for floating point and integer power (see [1, 2]). Please
take the numbers with _great_ care, but I hope they show the
point about the importance of benchmarking between compilers ;-)

Floating-point
--------------

dmd -inline -release -O -boundscheck=off test_pow.d -ofbin/dmd/test_pow
ldc -release -O3 -boundscheck=off test_pow.d -ofbin/ldc/test_pow
gdc -finline-functions -frelease -O3 test_pow.d -o bin/gdc/test_pow

dmd

math.pow 3 secs, 463 ms, 379 μs, and 6 hnsecs
^^       7 secs, 295 ms, 984 μs, and 3 hnsecs

ldc

llvm_pow 125 ms, 673 μs, and 2 hnsecs
^^        6 secs, 380 ms, 104 μs, and 2 hnsecs

gdc

math.pow 0 secs 125 ms
^^       2 secs 2161 ms

Integer power
-------------

dmd -inline -release -O -boundscheck=off test_powi.d -ofbin/dmd/test_powi
ldc -release -O3 -boundscheck=off test_powi.d -ofbin/ldc/test_powi
gdc -finline-functions -frelease -O3 test_powi.d -o bin/gdc/test_powi

dmd

math.pow 978 ms, 961 μs, and 8 hnsecs
^^       1 sec, 15 ms, 177 μs, and 6 hnsecs

ldc

math.pow 556 ms, 366 μs, and 8 hnsecs
^^       507 ms, 3 μs, and 1 hnsec

gdc

math.pow 0 secs 125 ms
^^       0 secs 42 ms

[1] https://github.com/wilzbach/perf-d/blob/master/test_pow.d
[2] https://github.com/wilzbach/perf-d/blob/master/test_powi.d
```
Jun 23 2016
```Can you post codegen for each ?
```
Jun 23 2016
```On Thursday, 23 June 2016 at 22:08:20 UTC, Seb wrote:
 https://github.com/wilzbach/perf-d/blob/master/test_pow.d
 https://github.com/wilzbach/perf-d/blob/master/test_powi.d

This is a bad way to benchmark. You are essentially testing the
compiler's ability to propagate your constants into the
benchmarking loop/hoisting the code to be benchmarked out of it.

For cross-compiler tests, you should define the candidate
functions in a separate compilation unit with an extern(C)
interface to inhibit any optimisations. In this case, your code
could e.g. be altered to:

---
import std.math : pow;
extern(C) long useStd(long a, long b) { return pow(a, b); }
extern(C) long useOp(long a, long b) { return a ^^ b; }
---
extern(C) long useStd(long a, long b);
extern(C) long useOp(long a, long b);

void main(string[] args)
{
    import std.datetime: benchmark, Duration;
    import std.stdio: writeln, writefln;
    import std.conv: to;

    long a = 5;
    long b = 25;
    long l = 0;

    void f0() { l += useStd(a, b); }
    void f1() { l += useOp(a, b); }

    auto rs = benchmark!(f0, f1)(100_000_000);

    foreach (j, r; rs)
    {
        version(GNU)
            writefln("%d %d secs %d ms", j, r.seconds(), r.msecs());
        else
            writeln(j, " ", r.to!Duration);
    }

    // prevent any optimization
    writeln(l);
}
---
(Keeping track of the sum is of course no longer really
necessary.)

I get the following results:

---
$ gdc -finline-functions -frelease -O3 -c test1.d
$ gdc -finline-functions -frelease -O3 test.d test1.o
$ ./a.out
0 0 secs 620 ms
1 0 secs 647 ms
4939766238266722816
---

---
$ ldc2 -O3 -release -c test1.d
$ ldc2 -O3 -release test.d test1.o
$ ./test
0 418 ms, 895 μs, and 3 hnsecs
1 409 ms, 776 μs, and 1 hnsec
4939766238266722816
---

---
$ dmd -O -release -inline -c test1.d
$ dmd -O -release -inline test.d test1.o
0 637 ms, 19 μs, and 9 hnsecs
1 722 ms, 57 μs, and 8 hnsecs
4939766238266722816
---

— David
```
Jun 23 2016
```On Thursday, 23 June 2016 at 23:34:54 UTC, David Nadlinger wrote:
I get the following results:

(This is of course not intended to be a comprehensive analysis.
For example, the codegen for the two functions is actually
identical on GDC and LDC, the relative differences are due to
measurement noise.  — David)
```
Jun 23 2016
```On Thursday, 23 June 2016 at 23:34:54 UTC, David Nadlinger wrote:
On Thursday, 23 June 2016 at 22:08:20 UTC, Seb wrote:
 https://github.com/wilzbach/perf-d/blob/master/test_pow.d
 https://github.com/wilzbach/perf-d/blob/master/test_powi.d

This is a bad way to benchmark. You are essentially testing the
compiler's ability to propagate your constants into the
benchmarking loop/hoisting the code to be benchmarked out of it.

For cross-compiler tests, you should define the candidate
functions in a separate compilation units with an extern(C)
interface to inhibit any optimisations. In this case, your code
could e.g. be altered to:

---
import std.math : pow;
extern(C) long useStd(long a, long b) { return pow(a, b); }
extern(C) long useOp(long a, long b) { return a ^^ b; }
---
extern(C) long useStd(long a, long b);
extern(C) long useOp(long a, long b);

void main(string[] args)
{
import std.datetime: benchmark, Duration;
import std.stdio: writeln, writefln;
import std.conv: to;

long a = 5;
long b = 25;
long l = 0;

void f0() { l += useStd(a, b); }
void f1() { l += useOp(a, b); }

auto rs = benchmark!(f0, f1)(100_000_000);

foreach(j,r;rs)
{
version(GNU)
writefln("%d %d secs %d ms", j, r.seconds(),
r.msecs());
else
writeln(j, " ", r.to!Duration);
}

// prevent any optimization
writeln(l);
}
---
(Keeping track of the sum is of course no longer really
necessary.)

I get the following results:

---
$ gdc -finline-functions -frelease -O3 -c test1.d
$ gdc -finline-functions -frelease -O3 test.d test1.o
$ ./a.out
0 0 secs 620 ms
1 0 secs 647 ms
4939766238266722816
---

---
$ ldc2 -O3 -release -c test1.d
$ ldc2 -O3 -release test.d test1.o
$ ./test
0 418 ms, 895 μs, and 3 hnsecs
1 409 ms, 776 μs, and 1 hnsec
4939766238266722816
---

---
$ dmd -O -release -inline -c test1.d
$ dmd -O -release -inline test.d test1.o
0 637 ms, 19 μs, and 9 hnsecs
1 722 ms, 57 μs, and 8 hnsecs
4939766238266722816
---

— David

This is my preferred way of benchmarking as well, people often
tell me cleverer ways but nothing gives me peace of mind like
separate compilation without mangling!

Not wanting to pick on Seb in particular, but I see quite a lot
of poor benchmarking on these forums from different people
(myself included, not these days though I hope).

My biggest bugbear is actually the opposite of what you are pointing
out here: people doing careful benchmarking and asm-inspection of
small code-fragments in isolation when in reality it is always
going to be inlined and optimised in context.
```
Jun 23 2016
```On Friday, 24 June 2016 at 00:26:52 UTC, John Colvin wrote:
My biggest bugbear is actually the opposite of what you are
point out here: people doing careful benchmarking and
asm-inspection of small code-fragments in isolation when in
reality it is always going to be inlined and optimised in
context.

Yep. This is why I was so adamant on using real-world code for my
performance tracking project, in case anybody has heard me going
on about it before.

— David
```
Jun 23 2016

Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
```On 06/23/2016 06:08 PM, Seb wrote:
 https://github.com/wilzbach/perf-d/blob/master/test_pow.d
 https://github.com/wilzbach/perf-d/blob/master/test_powi.d

Thanks for this work! I shamelessly copied it into
https://issues.dlang.org/show_bug.cgi?id=16200 to measure the proposed
pow implementation for double/uint. Any takers of the issue? Thanks! --
Andrei
```
Jun 24 2016
```On 6/23/16 1:22 PM, Andrei Alexandrescu wrote:
So I was looking for an efficient exponentiation implementation, and
http://www.stepanovpapers.com/PAM.pdf has a nice discussion starting at
page 54. Stepanov's solution, however, duplicates code, so I eliminated it:

https://dpaste.dzfl.pl/e53acb41885a

The cost is the use of one goto. Can the code be restructured to not
need it?

Why is this not the same?

for (;;) // previously outer:
{
    if (e % 2 != 0)
    {
        r = mul(r, b, overflow);
        if (e == 1) return r;
    }
    b = mul(b, b, overflow);
    e /= 2;
}

-Steve
```
Jun 23 2016
```On Thursday, 23 June 2016 at 17:22:55 UTC, Andrei Alexandrescu
wrote:
So I was looking for an efficient exponentiation
implementation, and http://www.stepanovpapers.com/PAM.pdf has a
nice discussion starting at page 54. Stepanov's solution,
however, duplicates code, so I eliminated it:

https://dpaste.dzfl.pl/e53acb41885a

The cost is the use of one goto. Can the code be restructured
to not need it?

Thanks,

Andrei

The inner loop is not a loop, so that's not a problem:

// Loop invariant: r * (b ^^ e) is the actual result
while (true) {
    if (e % 2 != 0) {
        r = mul(r, b, overflow);
        if (e == 1) return r;
    }

    b = mul(b, b, overflow);
    e /= 2;
}
```
Jun 23 2016
```On 23.06.2016 19:22, Andrei Alexandrescu wrote:
So I was looking for an efficient exponentiation implementation, and
http://www.stepanovpapers.com/PAM.pdf has a nice discussion starting at
page 54. Stepanov's solution, however, duplicates code, so I eliminated it:

https://dpaste.dzfl.pl/e53acb41885a

The cost is the use of one goto. Can the code be restructured to not
need it?

Thanks,

Andrei

Unrelated comment: 0^^0 should not overflow.
```
Jun 23 2016
```On Thu, Jun 23, 2016 at 08:59:07PM +0200, Timon Gehr via Digitalmars-d wrote:
[...]
Unrelated comment: 0^^0 should not overflow.

Says who?

T

--
```
Jun 23 2016
```On 23.06.2016 21:04, H. S. Teoh via Digitalmars-d wrote:
On Thu, Jun 23, 2016 at 08:59:07PM +0200, Timon Gehr via Digitalmars-d wrote:
[...]
Unrelated comment: 0^^0 should not overflow.

Says who?

...

According to your link, it is the 'Mathematician' who says that 0^^0 = 1.
This should be even less controversial in computer science circles. It's
the choice that works for e.g. combinatorics and type theory.

http://mathforum.org/dr.math/faq/faq.0.to.0.power.html

I would not want to use a power function that does not work for (0,0).
```
Jun 23 2016
```On 06/23/2016 02:59 PM, Timon Gehr wrote:
On 23.06.2016 19:22, Andrei Alexandrescu wrote:
So I was looking for an efficient exponentiation implementation, and
http://www.stepanovpapers.com/PAM.pdf has a nice discussion starting at
page 54. Stepanov's solution, however, duplicates code, so I
eliminated it:

https://dpaste.dzfl.pl/e53acb41885a

The cost is the use of one goto. Can the code be restructured to not
need it?

Thanks,

Andrei

Unrelated comment: 0^^0 should not overflow.

Why not? -- Andrei
```
Jun 23 2016
```On 23.06.2016 23:04, Andrei Alexandrescu wrote:
On 06/23/2016 02:59 PM, Timon Gehr wrote:
On 23.06.2016 19:22, Andrei Alexandrescu wrote:
So I was looking for an efficient exponentiation implementation, and
http://www.stepanovpapers.com/PAM.pdf has a nice discussion starting at
page 54. Stepanov's solution, however, duplicates code, so I
eliminated it:

https://dpaste.dzfl.pl/e53acb41885a

The cost is the use of one goto. Can the code be restructured to not
need it?

Thanks,

Andrei

Unrelated comment: 0^^0 should not overflow.

Why not? -- Andrei

Because 0^^0 = 1, and 1 is representable.

E.g. n^^m counts the number of functions from an m-set to an n-set, and
there is exactly one function from {} to {}.
```
Jun 23 2016
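
Spelling the counting argument out (standard combinatorics, added here only for reference):

```
% For finite sets M, N with |M| = m and |N| = n, a function M -> N picks
% one of n values independently for each of the m elements of M, so
%   n^m = |{ f : M -> N }|.
% With m = n = 0 both sets are empty, and there is exactly one function
% from the empty set to the empty set (the empty function), hence
\[
    0^0 \;=\; \bigl|\{\, f : \varnothing \to \varnothing \,\}\bigr| \;=\; 1 .
\]
```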
```On Thu, Jun 23, 2016 at 11:30:51PM +0200, Timon Gehr via Digitalmars-d wrote:
On 23.06.2016 23:04, Andrei Alexandrescu wrote:
On 06/23/2016 02:59 PM, Timon Gehr wrote:
On 23.06.2016 19:22, Andrei Alexandrescu wrote:
So I was looking for an efficient exponentiation implementation,
and http://www.stepanovpapers.com/PAM.pdf has a nice discussion
starting at page 54. Stepanov's solution, however, duplicates
code, so I eliminated it:

https://dpaste.dzfl.pl/e53acb41885a

The cost is the use of one goto. Can the code be restructured to
not need it?

Thanks,

Andrei

Unrelated comment: 0^^0 should not overflow.

Why not? -- Andrei

Because 0^^0 = 1, and 1 is representable.

E.g. n^^m counts the number of functions from an m-set to an n-set,
and there is exactly one function from {} to {}.

This argument only works for discrete sets.  If n and m are reals, you'd
need a different argument.

T

--
Verbing weirds language. -- Calvin (& Hobbes)
```
Jun 23 2016
```On Thursday, 23 June 2016 at 22:53:59 UTC, H. S. Teoh wrote:
This argument only works for discrete sets.  If n and m are
reals, you'd need a different argument.

For reals, you can use limits/continuation as argument.
```
Jun 23 2016
```On Thu, Jun 23, 2016 at 11:14:08PM +0000, deadalnix via Digitalmars-d wrote:
On Thursday, 23 June 2016 at 22:53:59 UTC, H. S. Teoh wrote:
This argument only works for discrete sets.  If n and m are reals,
you'd need a different argument.

For reals, you can use limits/continuation as argument.

The problem with that is that you get two different answers:

lim  x^y = 0
x->0

but:

lim  x^y = 1
y->0

So it's not clear what ought to happen when both x and y approach 0.

The problem is that the 2-variable function f(x,y)=x^y has a
discontinuity at (0,0). So approaching it from some directions give 1,
approaching it from other directions give 0, and it's not clear why one
should choose the value given by one direction above another.

Mathematicians arbitrarily chose its value to be 1 based on arguments
like the one Timon gave, but it's an arbitrary choice, not something
that the mathematics itself suggest.

T

--
Try to keep an open mind, but not so open your brain falls out. -- theboz
```
Jun 23 2016
```On 24.06.2016 01:18, H. S. Teoh via Digitalmars-d wrote:
On Thu, Jun 23, 2016 at 11:14:08PM +0000, deadalnix via Digitalmars-d wrote:
On Thursday, 23 June 2016 at 22:53:59 UTC, H. S. Teoh wrote:
This argument only works for discrete sets.  If n and m are reals,
you'd need a different argument.

For reals, you can use limits/continuation as argument.

The problem with that is that you get two different answers:

lim  x^y = 0
x->0

but:

lim  x^y = 1
y->0
...

That makes no sense. You want lim[x->0] x^0 and lim[y->0] 0^y.

So it's not clear what ought to happen when both x and y approach 0.

The problem is that the 2-variable function f(x,y)=x^y has a
discontinuity at (0,0). So approaching it from some directions give 1,
approaching it from other directions give 0, and it's not clear why one
should choose the value given by one direction above another.
...

It is /perfectly/ clear. What makes you so invested in the continuity of
the function 0^y? It's just not important.

Mathematicians arbitrarily chose its value to be 1 based on arguments
like the one Timon gave, but it's an arbitrary choice,

It is absolutely /not/ arbitrary.

not something that the mathematics itself suggest.
...

What kind of standard is that? 'The mathematics itself' does not suggest
that we do not define 2+2=5 while keeping all other function values
intact either, and it is still obvious to everyone that it would be a
bad idea to give such succinct notation to such an unimportant function.
```
Jun 23 2016
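
For reference, the two one-variable limits being contrasted above, written out (standard calculus):

```
% Along the line y = 0: x^0 = 1 for every x != 0.
\[
    \lim_{x \to 0} x^{0} = 1 .
\]
% Along the line x = 0: 0^y = 0 for every y > 0.
\[
    \lim_{y \to 0^{+}} 0^{y} = 0 .
\]
% The two directional limits disagree, so the two-variable limit of x^y
% at (0, 0) does not exist -- which is the discontinuity under discussion.
```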
```On Fri, Jun 24, 2016 at 01:58:01AM +0200, Timon Gehr via Digitalmars-d wrote:
On 24.06.2016 01:18, H. S. Teoh via Digitalmars-d wrote:
On Thu, Jun 23, 2016 at 11:14:08PM +0000, deadalnix via Digitalmars-d wrote:
On Thursday, 23 June 2016 at 22:53:59 UTC, H. S. Teoh wrote:
This argument only works for discrete sets.  If n and m are reals,
you'd need a different argument.

For reals, you can use limits/continuation as argument.

The problem with that is that you get two different answers:

lim  x^y = 0
x->0

but:

lim  x^y = 1
y->0
...

That makes no sense. You want lim[x->0] x^0 and lim[y->0] 0^y.

Sorry, I was attempting to write exactly that but with ASCII art. No
disagreement there.

So it's not clear what ought to happen when both x and y approach 0.

The problem is that the 2-variable function f(x,y)=x^y has a
discontinuity at (0,0). So approaching it from some directions give
1, approaching it from other directions give 0, and it's not clear
why one should choose the value given by one direction above
another.  ...

It is /perfectly/ clear. What makes you so invested in the continuity
of the function 0^y? It's just not important.

I'm not.  I'm just pointing out that x^y has an *essential*
discontinuity at (0,0), and the choice 0^0 = 1 is a matter of
convention. A widely-adopted convention, but a convention nonetheless.
It does not change the fact that (0,0) is an essential discontinuity of
x^y.

[...]
not something that the mathematics itself suggest.
...

What kind of standard is that? 'The mathematics itself' does not
suggest that we do not define 2+2=5 while keeping all other function
values intact either, and it is still obvious to everyone that it
would be a bad idea to give such succinct notation to such an
unimportant function.

Nobody said anything about defining 2+2=5.  What function are you
talking about that would require 2+2=5?

It's clear that 0^0=1 is a choice made by convenience, no doubt made to
simplify the statement of certain theorems, but the fact remains that
(0,0) is a discontinuous point of x^y. At best it is undefined, since
it's an essential discontinuity, just like x=0 is an essential
discontinuity of 1/x.  What *ought* to be the value of 0^0 is far from
clear; it was a controversy that raged throughout the 19th century and
only in recent decades consensus began to build around 0^0=1.

T

--
Let X be the set not defined by this sentence...
```
Jun 23 2016
```On 24.06.2016 02:14, H. S. Teoh via Digitalmars-d wrote:
On Fri, Jun 24, 2016 at 01:58:01AM +0200, Timon Gehr via Digitalmars-d wrote:
On 24.06.2016 01:18, H. S. Teoh via Digitalmars-d wrote:
On Thu, Jun 23, 2016 at 11:14:08PM +0000, deadalnix via Digitalmars-d wrote:
On Thursday, 23 June 2016 at 22:53:59 UTC, H. S. Teoh wrote:
This argument only works for discrete sets.  If n and m are reals,
you'd need a different argument.

For reals, you can use limits/continuation as argument.

The problem with that is that you get two different answers:

lim  x^y = 0
x->0

but:

lim  x^y = 1
y->0
...

That makes no sense. You want lim[x->0] x^0 and lim[y->0] 0^y.

Sorry, I was attempting to write exactly that but with ASCII art. No
disagreement there.

So it's not clear what ought to happen when both x and y approach 0.

The problem is that the 2-variable function f(x,y)=x^y has a
discontinuity at (0,0). So approaching it from some directions give
1, approaching it from other directions give 0, and it's not clear
why one should choose the value given by one direction above
another.  ...

It is /perfectly/ clear. What makes you so invested in the continuity
of the function 0^y? It's just not important.

I'm not.  I'm just pointing out that x^y has an *essential*
discontinuity at (0,0),

Which just means that there is no limiting value for that point.

and the choice 0^0 = 1 is a matter of
convention. A widely-adopted convention, but a convention nonetheless.
It does not change the fact that (0,0) is an essential discontinuity of
x^y.
...

All notation is convention, but not all aspects of notations are arbitrary.

[...]
not something that the mathematics itself suggest.
...

What kind of standard is that? 'The mathematics itself' does not
suggest that we do not define 2+2=5 while keeping all other function
values intact either, and it is still obvious to everyone that it
would be a bad idea to give such succinct notation to such an
unimportant function.

Nobody said anything about defining 2+2=5.  What function are you
talking about that would require 2+2=5?
...

There exists a function that agrees with + on all values except (2,2),
where it is 5. If we call that function '+', we can still do algebra on
real numbers by special casing the point (2,2) in most theorems, but we
don't want to.

It's clear that 0^0=1 is a choice made by convenience, no doubt made to
simplify the statement of certain theorems, but the fact remains that
(0,0) is a discontinous point of x^y.

Yup.

At best it is undefined, since it's an essential discontinuity,

Nope. x=0 is an essential discontinuity of sgn(x) too, yet sgn(0)=0.

just like x=0 is an essential discontinuity of 1/x.

That is not why 1/0 is left undefined on the real numbers. It's a
convention too, and it is not arbitrary.

What *ought* to be the value of 0^0 is far from
clear; it was a controversy that raged throughout the 19th century and
only in recent decades consensus began to build around 0^0=1.
...

This is the 21st century and it has become clear what 0^0 should be.
There is no value in discrediting the convention by calling it
'arbitrary' when it is not.
```
Jun 23 2016
```On Friday, 24 June 2016 at 01:49:27 UTC, Timon Gehr wrote:
On 24.06.2016 02:14, H. S. Teoh via Digitalmars-d wrote:
On Fri, Jun 24, 2016 at 01:58:01AM +0200, Timon Gehr via
Digitalmars-d wrote:
On 24.06.2016 01:18, H. S. Teoh via Digitalmars-d wrote:
On Thu, Jun 23, 2016 at 11:14:08PM +0000, deadalnix via
Digitalmars-d wrote:
On Thursday, 23 June 2016 at 22:53:59 UTC, H. S. Teoh wrote:
This argument only works for discrete sets.  If n and m
are reals,
you'd need a different argument.

For reals, you can use limits/continuation as argument.

The problem with that is that you get two different answers:

lim  x^y = 0
x->0

but:

lim  x^y = 1
y->0
...

That makes no sense. You want lim[x->0] x^0 and lim[y->0] 0^y.

Sorry, I was attempting to write exactly that but with ASCII
art. No
disagreement there.

So it's not clear what ought to happen when both x and y
approach 0.

The problem is that the 2-variable function f(x,y)=x^y has a
discontinuity at (0,0). So approaching it from some
directions give
1, approaching it from other directions give 0, and it's not
clear
why one should choose the value given by one direction above
another.  ...

It is /perfectly/ clear. What makes you so invested in the
continuity
of the function 0^y? It's just not important.

I'm not.  I'm just pointing out that x^y has an *essential*
discontinuity at (0,0),

Which just means that there is no limiting value for that point.

and the choice 0^0 = 1 is a matter of
convention. A widely-adopted convention, but a convention
nonetheless.
It does not change the fact that (0,0) is an essential
discontinuity of
x^y.
...

All notation is convention, but not all aspects of notations
are arbitrary.

[...]
not something that the mathematics itself suggest.
...

What kind of standard is that? 'The mathematics itself' does
not
suggest that we do not define 2+2=5 while keeping all other
function
values intact either, and it is still obvious to everyone
that it
would be a bad idea to give such succinct notation to such an
unimportant function.

Nobody said anything about defining 2+2=5.  What function are
you
talking about that would require 2+2=5?
...

There exists a function that agrees with + on all values except
(2,2), where it is 5. If we call that function '+', we can
still do algebra on real numbers by special casing the point
(2,2) in most theorems, but we don't want to.

It's clear that 0^0=1 is a choice made by convenience, no doubt
made to simplify the statement of certain theorems, but the fact
remains that
(0,0) is a discontinous point of x^y.

Yup.

At best it is undefined, since it's an essential discontinuity,

Nope. x=0 is an essential discontinuity of sgn(x) too, yet
sgn(0)=0.

just like x=0 is an essential discontinuity of 1/x.

That is not why 1/0 is left undefined on the real numbers. It's
a convention too, and it is not arbitrary.

What *ought* to be the value of 0^0 is far from
clear; it was a controversy that raged throughout the 19th
century and
only in recent decades consensus began to build around 0^0=1.
...

This is the 21st century and it has become clear what 0^0
should be. There is no value in discrediting the convention by
calling it 'arbitrary' when it is not.

You do realize that e^(-1/t)^t is a counter example?

e^(-1/t) -> 0 as t -> 0
t -> 0 as t -> 0

but e^(-1/t)^t does not -> 1 as t -> 0, which is obvious since
it's 1/e.

So, we can define 0^0 = 1 and maybe that is better than 2, but it
is arbitrary in the sense that it's a definition. It may bear
fruit but it is not necessarily meaningful.

Suppose a person is running some numerical simulation that
happens to be computing a value for an equation that is
essentially based on the above. Because of numerical rounding,
let's suppose we end up with t = 0. At that moment the calculation
goes haywire and destroys a spacecraft with 3 people on it. Those
people have families and young children who end up growing up
without a father and turn to crime....

All because of some idiot who wanted to break the laws of physics
by arbitrarily defining something to be true when it is not.

Seriously, your logic has severe consequences.  Just because 0^0
looks like 1, it is far more complex than that and your tiny
little brain can't comprehend them. First law of mathematics:
Please don't kill people, computers can't program themselves and
grants can't write themselves, nor can the IRS collect from dead
people.

Please, Just say no to 0^0 = 1!

Luckily, it was only a simulation and no one got hurt... this
time!
```
Jun 23 2016
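
The limit claimed in that post does come out to 1/e; for t > 0 the expression simplifies exactly:

```
\[
    \bigl(e^{-1/t}\bigr)^{t} \;=\; e^{(-1/t)\,t} \;=\; e^{-1},
    \qquad\text{so}\qquad
    \lim_{t \to 0^{+}} \bigl(e^{-1/t}\bigr)^{t} \;=\; \tfrac{1}{e} \;\neq\; 1,
\]
% even though the base e^{-1/t} and the exponent t both tend to 0.
```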
```On 24.06.2016 04:36, Smoke Adams wrote:
....

You do realize that e^(-1/t)^t is a counter example?

e^(-1/t) -> 0 as t -> 0
t -> 0 as t -> 0
....

That's not a counterexample to anything I said. ^ is discontinuous at
(0,0) and indeed, you can force the limit to an arbitrary value by
choosing the two expressions the right way. That's clear.

but e^(-1/t)^t does not -> 1 as t-> 0, which is obvious since it/s 1/e.

So, We can define 0^0 = 1 and maybe that is better than 2, but it is
arbitrary in the sense that it's a definition. It may bear fruit but it
is not necessarily meaningful.
...

It's meaningful in those cases where you want to use 0^0, and otherwise
just don't use it.

Suppose a person is running some numerical simulation that happens to be
computing an a value for an equation that is essentially based on the
above.

Then the numerical simulation is inherently broken. a^b takes on any
arbitrary non-negative value in an arbitrarily small region around (0,0)
and is undefined at many points in such a region. (BTW: It would be fine
with me if 0.0^^0.0 was NaN -- that's a completely different case than
the one at hand: pow on integers.)

... break the laws of physics by
arbitrarily defining something to be true when it is not.
...

Utter nonsense. (Also note that the 'laws of physics' typically give
rise to piecewise analytic functions, and if you only consider analytic
functions, 0 ^ 0 = 1 is actually the right answer.)
```
Jun 23 2016
```On 24.06.2016 05:22, Timon Gehr wrote:
... break the laws of physics by
arbitrarily defining something to be true when it is not.
...

Utter nonsense. (Also note that the 'laws of physics' typically give
rise to piecewise analytic functions, and if you only consider analytic
functions, 0 ^ 0 = 1 is actually the right answer.)

Also, if you happen to know of any case where Physics essentially
depends on x^y where both x and y vary around 0, please do enlighten us.
```
Jun 23 2016
```On Fri, Jun 24, 2016 at 05:22:11AM +0200, Timon Gehr via Digitalmars-d wrote:
[...]
(BTW: It would be fine with me if 0.0^^0.0 was NaN -- that's a
completely different case than the one at hand: pow on integers.)

That's even worse. So 0^0=1 if 0 is regarded as an integer, and 0^0=NaN
if 0 is regarded as a real?  That's even more horrible than my
(admittedly not very good) argument that 0^0 should not be 1.

[...]
(Also note that the 'laws of physics' typically give rise to piecewise
analytic functions, and if you only consider analytic functions, 0 ^ 0
= 1 is actually the right answer.)

There are analytic functions of the form f(x)^g(x) where f(0)=g(0)=0, for
which the limit as x->0 can be made to approach whatever number you
choose. If you happen to be working with a function of that form, 0^0=1
may not be what you want.

T

--
The fact that anyone still uses AOL shows that even the presence of options
doesn't stop some people from picking the pessimal one. - Mike Ellis
```
Jun 23 2016
```On 24.06.2016 08:11, H. S. Teoh via Digitalmars-d wrote:
On Fri, Jun 24, 2016 at 05:22:11AM +0200, Timon Gehr via Digitalmars-d wrote:
[...]
(BTW: It would be fine with me if 0.0^^0.0 was NaN -- that's a
completely different case than the one at hand: pow on integers.)

That's even worse. So 0^0=1 if 0 is regarded as an integer, and 0^0=NaN
if 0 is regarded as a real?

A 'double'.

That's even more horrible than my
(admittedly not very good) argument that 0^0 should not be 1.
...

0.0^^0 should certainly be 1, so I do think it makes sense to have
0.0^^0.0 = 1 too.

[...]
(Also note that the 'laws of physics' typically give rise to piecewise
analytic functions, and if you only consider analytic functions, 0 ^ 0
= 1 is actually the right answer.)

rinconmatematico.com/foros/index.php?action=dlattach;topic=61041.0;attach=10948
```
Jun 24 2016

"Smoke" Adams <SA2432 gmail.com> writes:
```On Friday, 24 June 2016 at 03:22:11 UTC, Timon Gehr wrote:
On 24.06.2016 04:36, Smoke Adams wrote:
....

You do realize that e^(-1/t)^t is a counter example?

e^(-1/t) -> 0 as t -> 0
t -> 0 as t -> 0
....

That's not a counterexample to anything I said. ^ is
discontinuous at (0,0) and indeed, you can force the limit to
an arbitrary value by choosing the two expressions the right
way. That's clear.

but e^(-1/t)^t does not -> 1 as t-> 0, which is obvious since
it/s 1/e.

So, We can define 0^0 = 1 and maybe that is better than 2, but
it is
arbitrary in the sense that it's a definition. It may bear
fruit but it
is not necessarily meaningful.
...

It's meaningful in those cases where you want to use 0^0, and
otherwise just don't use it.

Suppose a person is running some numerical simulation that
happens to be
computing an a value for an equation that is essentially based
on the
above.

Then the numerical simulation is inherently broken. a^b takes
on any arbitrary non-negative value in an arbitrary small
region around (0,0) and is undefined at many points in such a
region. (BTW: It would be fine with me if 0.0^^0.0 was NaN --
that's a completely different case than the one at hand: pow on
integers.)

... break the laws of physics by
arbitrarily defining something to be true when it is not.
...

Utter nonsense. (Also note that the 'laws of physics' typically
give rise to piecewise analytic functions, and if you only
consider analytic functions, 0 ^ 0 = 1 is actually the right

Please, it seems you only know enough about math and physics to
get you into trouble.

1. Do you realize that the "equations" of man are only
approximations to the equations of life? Or do you really think
gravity behaves exactly as 1/r^2?

Also, what about when r = 0? How do you propose to define it? Set
it to 0? 1? -infinity? What's the solution here? Surely there is
a non-arbitrary solution?

2. You hold way too much faith in man.  Math and Science are
tools; tools are created, manipulated, updated, retooled, and
used to make other tools.  Their only use is how well they work.
They work quite well, but we cannot prove how accurate they are.
If you know about Gödel, you would know that we can't even prove
that we can't prove anything (which itself is meaningless).

3. What you are arguing is the convenience factor. Sometimes also
known as the lazy factor. Just admit it and stop trying to prove
that somehow god has made this choice.

4. An equation in math is just one form of the same abstract
mathematical construct. Simply put: f(x) = x = sqrt(x)^2 =
inv(cos(x)) = .... All these types of identities can be substituted
in an equation to change its form.

5. All forms are different. Obviously x != sqrt(x)^2 in all
cases. Hence the transformations using identities are not
actually equal in all cases: sqrt(x)^2 = x only for x >= 0.

6. Symbols are not absolute. They are mans creation and agreement
on their meaning so they can be useful. x is just chicken scratch
until everyone agrees on how to interpret it.

7. And this is where the problem lies. Assumptions can be deadly.
using integers only is nonsense because you have made the
assumption that only integers will ever be used and also that the
integers are not related to the real number case. Both of these
are provably false.

e.g., let's suppose that, for integers, we have PowI(x,y) and for
reals, PowR(x,y).

Someone does

if (isInteger(x) && isInteger(y))
    return PowI(x,y);
else
    return PowR(x,y);

Why they might do this is immaterial. I just did it, so it is
possible. Maybe PowI is faster because of some type of new
algorithm someone came up with.

8. The truth is the correct answer. The truth of the matter is,
0^0 is undefined. Accept it; just because you don't like the
answer doesn't give you the right to change it. This is the same
belief many have in Santa Claus, the Tooth Fairy, Jesus Christ,
etc. No proof they exist, but they believe it and change the truth
to match the result (which is anti-science, anti-math, and
anti-logic, anti-progress, and anti-truth and ultimately
anti-god).

It's one thing to argue the convenience factor, and this is ok
but it should be well understood by everyone. The other is to
argue that what you are saying is fact, and this is clearly
demonstrably false. Arguing a special case, such as using only
integers, is just as false because F => F is a false statement.

9. If you are using some other kind of logical system than the
standard one that all of math and science is built on, then
forgive me for being wrong... But you didn't state this from the
beginning and I mistakenly assumed we were in the same reality.
```
Jun 24 2016
```On Thursday, 23 June 2016 at 23:18:03 UTC, H. S. Teoh wrote:
On Thu, Jun 23, 2016 at 11:14:08PM +0000, deadalnix via
Digitalmars-d wrote:
On Thursday, 23 June 2016 at 22:53:59 UTC, H. S. Teoh wrote:
This argument only works for discrete sets.  If n and m are
reals, you'd need a different argument.

For reals, you can use limits/continuation as argument.

The problem with that is that you get two different answers:

lim  x^y = 0
x->0

but:

lim  x^y = 1
y->0

So it's not clear what ought to happen when both x and y
approach 0.

[...]

First: lim  x^x = 1
x->0

Second: Just look at Wikipedia and take the IEEE floating point
standard:
https://en.wikipedia.org/wiki/Exponentiation#Zero_to_the_power_of_zero

Regards mt.
```
Jun 23 2016
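
The first claim follows from rewriting the power in exponential form (standard calculus, added for reference):

```
\[
    x^{x} \;=\; e^{x \ln x},
    \qquad
    \lim_{x \to 0^{+}} x \ln x = 0
    \;\;\Longrightarrow\;\;
    \lim_{x \to 0^{+}} x^{x} \;=\; e^{0} \;=\; 1 .
\]
```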
```On Friday, 24 June 2016 at 06:58:14 UTC, Martin Tschierschke
wrote:
On Thursday, 23 June 2016 at 23:18:03 UTC, H. S. Teoh wrote:
On Thu, Jun 23, 2016 at 11:14:08PM +0000, deadalnix via
Digitalmars-d wrote:
On Thursday, 23 June 2016 at 22:53:59 UTC, H. S. Teoh wrote:
This argument only works for discrete sets.  If n and m are
reals, you'd need a different argument.

For reals, you can use limits/continuation as argument.

The problem with that is that you get two different answers:

lim  x^y = 0
x->0

but:

lim  x^y = 1
y->0

So it's not clear what ought to happen when both x and y
approach 0.

[...]

First: lim  x^x = 1
x->0

Second: Just look at Wikipedia and take the IEEE floating point
standard:
https://en.wikipedia.org/wiki/Exponentiation#Zero_to_the_power_of_zero

But it is even more funky...

NaN^-1 = NaN
NaN^0 = 1.0
NaN^1 = NaN

Inf^-1 = 0.0
Inf^0 = 1.0
Inf^1 = Inf

:-)
```
Jun 24 2016
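
A small D check of the special cases listed above; the expected values in the comments are the IEEE pow results quoted in the post, not guarantees about any particular runtime.

```
import std.math : pow;
import std.stdio : writefln;

void main()
{
    immutable double nan = double.nan;
    immutable double inf = double.infinity;

    // Expected per the IEEE pow behaviour quoted above:
    writefln("pow(nan, -1) = %s", pow(nan, -1.0)); // nan
    writefln("pow(nan,  0) = %s", pow(nan, 0.0));  // 1
    writefln("pow(nan,  1) = %s", pow(nan, 1.0));  // nan
    writefln("pow(inf, -1) = %s", pow(inf, -1.0)); // 0
    writefln("pow(inf,  0) = %s", pow(inf, 0.0));  // 1
    writefln("pow(inf,  1) = %s", pow(inf, 1.0));  // inf
}
```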
```On Friday, 24 June 2016 at 09:04:55 UTC, Ola Fosheim Grøstad
wrote:
Inf^-1 = 0.0
Inf^0 = 1.0
Inf^1 = Inf

:-)

Or actually:

NaN^-epsilon = NaN
NaN^0 = 1.0
NaN^epsilon = NaN

Inf^-epsilon = 0.0
Inf^0 = 1.0
Inf^epsilon = Inf

It is rather difficult to argue that this is reasonable...
```
Jun 24 2016
```On 06/24/2016 02:58 AM, Martin Tschierschke wrote:
Second: Just look at Wikipedia and take the IEEE floating point standard:
https://en.wikipedia.org/wiki/Exponentiation#Zero_to_the_power_of_zero

Scrolling down, I see how IEEE prescribes pown(0, 0) to return 1. I
guess that's a good practical argument. -- Andrei
```
Jun 24 2016
```On Friday, 24 June 2016 at 13:51:33 UTC, Andrei Alexandrescu
wrote:
On 06/24/2016 02:58 AM, Martin Tschierschke wrote:
Second: Just look at Wikipedia and take the IEEE floating
point standard:
https://en.wikipedia.org/wiki/Exponentiation#Zero_to_the_power_of_zero

Scrolling down, I see how IEEE prescribes pown(0, 0) to return
1. I guess that's a good practical argument. -- Andrei

Nope, pown requires an exact integer, powr(0.0,0.0) yields NaN.
```
Jun 24 2016
```On Friday, 24 June 2016 at 15:17:52 UTC, Ola Fosheim Grøstad
wrote:
On Friday, 24 June 2016 at 13:51:33 UTC, Andrei Alexandrescu
wrote:
On 06/24/2016 02:58 AM, Martin Tschierschke wrote:
Second: Just look at Wikipedia and take the IEEE floating
point standard:
https://en.wikipedia.org/wiki/Exponentiation#Zero_to_the_power_of_zero

Scrolling down, I see how IEEE prescribes pown(0, 0) to return
1. I guess that's a good practical argument. -- Andrei

Nope, pown requires an exact integer, powr(0.0,0.0) yields NaN.

Actually, the wikipedia article was wrong.

powr(0.0,y) and y<0 => exception
powr(x,y) and x<0 => exception
powr(0.0,0.0) => exception
powr(Inf,0.0) => exception
powr(1,Inf) => exception
powr(x,qNaN) => qNaN
powr(qNaN,y) => qNaN

Much more sane than pow(x,y)...
```
Jun 24 2016

deadalnix <deadalnix gmail.com> writes:
```On Thursday, 23 June 2016 at 23:18:03 UTC, H. S. Teoh wrote:
On Thu, Jun 23, 2016 at 11:14:08PM +0000, deadalnix via
Digitalmars-d wrote:
On Thursday, 23 June 2016 at 22:53:59 UTC, H. S. Teoh wrote:
This argument only works for discrete sets.  If n and m are
reals, you'd need a different argument.

For reals, you can use limits/continuation as argument.

The problem with that is that you get two different answers:

lim  x^y = 0
x->0

but:

lim  x^y = 1
y->0

So it's not clear what ought to happen when both x and y
approach 0.

The problem is that the 2-variable function f(x,y)=x^y has a
discontinuity at (0,0). So approaching it from some directions
give 1, approaching it from other directions give 0, and it's
not clear why one should choose the value given by one
direction above another.

Mathematicians arbitrarily chose its value to be 1 based on
arguments like the one Timon gave, but it's an arbitrary
choice, not something that the mathematics itself suggest.

T

https://en.wikipedia.org/wiki/Exponentiation#IEEE_floating_point_standard

Most programming languages with a power function are implemented
using the IEEE pow function and therefore evaluate 0^0 as 1. The
later C and C++ standards describe this as the normative
behaviour. The Java standard mandates this behavior. The .NET
Framework method System.Math.Pow also treats 0^0 as 1.

I understand that the question is open mathematically, but this
provides value.
```
Jun 24 2016
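
This is easy to poke at directly in D; on implementations following the convention quoted above, both lines print 1 (a quick sketch, not a conformance claim):

```
import std.math : pow;
import std.stdio : writeln;

void main()
{
    // Integral ^^, folded or lowered by the compiler:
    writeln(0 ^^ 0);
    // Floating-point pow, following the C/IEEE convention described above:
    writeln(pow(0.0, 0.0));
}
```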
```On 24.06.2016 00:53, H. S. Teoh via Digitalmars-d wrote:
Because 0^^0 = 1, and 1 is representable.

E.g. n^^m counts the number of functions from an m-set to an n-set,
and there is exactly one function from {} to {}.

This argument only works for discrete sets.

No, it works for any cardinals n and m.

If n and m are reals, you'd
need a different argument.

I don't want to argue this at all. x^^0 is an empty product no matter
what set I choose x and 0 from. 0^^0 = 1 is the only reasonable
convention, and this is absolutely painfully obvious from where I stand.
What context are you using 'pow' in that would suggest otherwise?

Also, Andrei's implementation explicitly works on integers anyway.
```
Jun 23 2016
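
Written out, the empty-product reading the post invokes:

```
% x^0 is a product with zero factors, and an empty product is the
% multiplicative identity, independently of what x is:
\[
    x^{0} \;=\; \prod_{k=1}^{0} x \;=\; 1 \quad\text{for every } x,
    \qquad\text{in particular}\quad 0^{0} = 1 .
\]
```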
```On Fri, Jun 24, 2016 at 01:33:46AM +0200, Timon Gehr via Digitalmars-d wrote:
On 24.06.2016 00:53, H. S. Teoh via Digitalmars-d wrote:
Because 0^^0 = 1, and 1 is representable.

E.g. n^^m counts the number of functions from an m-set to an n-set,
and there is exactly one function from {} to {}.

This argument only works for discrete sets.

No, it works for any cardinals n and m.

It doesn't.  What is the meaning of m-set and n-set when m and n are
real numbers?  How many elements are in a pi-set?  How many functions
are there from a pi-set to an e-set?  Your "number of functions"
argument only works if m and n are discrete numbers.

If n and m are reals, you'd need a different argument.

I don't want to argue this at all. x^^0 is an empty product no matter
what set I choose x and 0 from.

And 0^^x is a non-empty product when x != 0.  So why should we choose
the limit of x^^0 as opposed to the limit of 0^^x?

0^^0 = 1 is the only reasonable convention, and this is absolutely
painfully obvious from where I stand. What context are you using 'pow'
in that would suggest otherwise?

When computing the limit of x^y as x->0?

Also, Andrei's implementation explicitly works on integers anyway.

Even for integers, the limit of x^y as x->0 is 0.

My point is that the choice is an arbitrary one. It doesn't arise
directly from the mathematics itself.  I understand that 0^0 is chosen
to equal 1 in order for certain theorems to be more "aesthetically
beautiful", just like 0! is chosen to equal 1 because it makes the
definition of factorial "nicer". But it's still an arbitrary choice.

T

--
Those who don't understand Unix are condemned to reinvent it, poorly.
```
Jun 23 2016
```On 24.06.2016 01:58, H. S. Teoh via Digitalmars-d wrote:
On Fri, Jun 24, 2016 at 01:33:46AM +0200, Timon Gehr via Digitalmars-d wrote:
On 24.06.2016 00:53, H. S. Teoh via Digitalmars-d wrote:
Because 0^^0 = 1, and 1 is representable.

E.g. n^^m counts the number of functions from an m-set to an n-set,
and there is exactly one function from {} to {}.

This argument only works for discrete sets.

No, it works for any cardinals n and m.

It doesn't.  What is the meaning of m-set and n-set when m and n are
real numbers?  How many elements are in a pi-set?  How many functions
are there from a pi-set to an e-set?  Your "number of functions"
argument only works if m and n are discrete numbers.
...

Most real numbers are not (identified with) cardinals, so I don't see
how that contradicts what I said, or how what you are saying is the same
thing as what you previously stated.

If n and m are reals, you'd need a different argument.

I don't want to argue this at all. x^^0 is an empty product no matter
what set I choose x and 0 from.

And 0^^x is a non-empty product when x != 0.  So why should we choose
the limit of x^^0 as opposed to the limit of 0^^x?
...

I don't care about any of the limits. ^^ has an essential discontinuity
at (0,0), so the limits don't need to have a bearing on how you define
the value, but if they do, consider that there is only one direction in
which the limit is 0, and an uncountably infinite number of directions
in which the limit is 1.

Have a look at this plot: http://www.wolframalpha.com/input/?i=x%5E-y
Can you even spot the discontinuity? (I can't.)

0^^0 = 1 is the only reasonable convention, and this is absolutely
painfully obvious from where I stand. What context are you using 'pow'
in that would suggest otherwise?

When computing the limit of x^y as x->0?
...

That limit is 1 if y=0 and 0 if y>0. I assume you mean the limit of 0^y
as y->0. Why is that important? Why should that single odd direction
ruin it for everyone else?

Also, Andrei's implementation explicitly works on integers anyway.

Even for integers, the limit of x^y as x->0 is 0.
...

This is wrong. Again, I assume that you mean 0^y as y->0.
In any case, on integers, you cannot define this limit without giving
the value at (0,0) in the first place, and the limit is whatever value
you gave.

My point is that the choice is an arbitrary one. It doesn't arise
directly from the mathematics itself.  I understand that 0^0 is chosen
to equal 1 in order for certain theorems to be more "aesthetically
beautiful",

Why do you think that is? Again consider my example where a+b is
actually a+b unless a=b=2, in which case it is 5.

Now, the theorem that states that (the original) addition is associative
gets a lot less "aesthetically beautiful". Do you consider the
definition 2+2=4 an 'arbitrary choice'? If you do, then our disagreement
is founded on different ideas of what it means for notation to be
'arbitrary' as opposed to 'well-designed'.

Notation is almost exclusively about aesthetics!

just like 0! is chosen to equal 1 because it makes the
definition of factorial "nicer". But it's still an arbitrary choice.
...

n! = Γ(n+1).
0! = Γ(1) = 1. Consider it arbitrary if you wish.

(The offset by 1 here is IMHO a real example of an unfortunate and
arbitrary choice of notation, but I hope that does not take away from my
real point.)

Anyway, 2+2=4 because this makes the definition of + "nicer". It is not
an arbitrary choice. There is /a reason/ why it is "nicer".
```
Jun 23 2016
```On Fri, Jun 24, 2016 at 02:40:15AM +0200, Timon Gehr via Digitalmars-d wrote:
On 24.06.2016 01:58, H. S. Teoh via Digitalmars-d wrote:
On Fri, Jun 24, 2016 at 01:33:46AM +0200, Timon Gehr via Digitalmars-d wrote:
On 24.06.2016 00:53, H. S. Teoh via Digitalmars-d wrote:
Because 0^^0 = 1, and 1 is representable.

E.g. n^^m counts the number of functions from an m-set to an
n-set, and there is exactly one function from {} to {}.

This argument only works for discrete sets.

No, it works for any cardinals n and m.

It doesn't.  What is the meaning of m-set and n-set when m and n are
real numbers?  How many elements are in a pi-set?  How many
functions are there from a pi-set to an e-set?  Your "number of
functions" argument only works if m and n are discrete numbers.
...

Most real numbers are not (identified with) cardinals, so I don't see
how that contradicts what I said, or how what you are saying is the
same thing as what you previously stated.

You said n^^m counts the *number* of functions from an m-set to an
n-set.  How do you define "number of functions" when m and n are
non-integer?

[...]
I don't care about any of the limits. ^^ has an essential
discontinuity at (0,0), so the limits don't need to have a bearing on
how you define the value, but if they do, consider that there is only
one direction in which the limit is 0, and an uncountably infinite
number of directions in which the limit is 1.

That is only true for straight-line paths. You can instead approach
(0,0) along curves (f(x), g(x)) with f(0)=g(0)=0, such that the limit
as x->0 can be made to approach any number you wish.

Have a look at this plot: http://www.wolframalpha.com/input/?i=x%5E-y
Can you even spot the discontinuity? (I can't.)

That's because your graph is of the function x^(-y), which is a
completely different beast from the function x^y.  If you look at the
graph of the latter, you can see how the manifold curves around x=0 in
such a way that the curvature becomes extreme around (0,0), a sign of an
inherent discontinuity.

[...]
Why do you think that is? Again consider my example where a+b is actually
a+b unless a=b=2, in which case it is 5.

Your example has no bearing on this discussion at all. + has no
discontinuity around 2, so I don't understand what this obsession with
2+2=5 has to do with x^y at (0,0), which *does* have a discontinuity.
Besides, it's very clear from basic arithmetic what 2+2 means, whereas
the same can't be said for 0^^0.

[...]
just like 0! is chosen to equal 1 because it makes the definition of
factorial "nicer". But it's still an arbitrary choice.
...

n! = Γ(n+1).
0! = Γ(1) = 1. Consider it arbitrary if you wish.

The gamma function is only one of many possible interpolations of the
integer factorial to real numbers. It is the one we like to use because
it has certain nice properties, but it's far from being the only
possible choice.  Again, an arbitrary choice made on aesthetic grounds
rather than something inherently dictated by the concept of factorial.

(The offset by 1 here is IMHO a real example of an unfortunate and
arbitrary choice of notation, but I hope that does not take away from
my real point.)

This "unfortunate and arbitrary" choice was precisely due to the same
idea of aesthetically simplifying the integral that defines the
function. Look what it bought us: an inconvenient off-by-1 argument that
has to be accounted for everywhere the function is used as an
interpolation of factorial.

Defining 0^0=1 in like manner makes certain definitions and use cases
"nicer", but not as nice in other cases. Why not just face the fact that
it's an essential discontinuity that is best left undefined?

Anyway, 2+2=4 because this makes the definition of + "nicer". It is
not an arbitrary choice. There is /a reason/ why it is "nicer".

This is a totally backwards argument. 2+2=4 because that's how counting
in the real world works, not because it happens to correspond to some
abstract sense of aesthetics.  Integer arithmetic came before abstract
algebra, not the other way round.  Ancient people knew that 2+2=4 long
before the concept of associativity, commutativity, function continuity,
etc., were even dreamed of.  All of the latter arose as a *result* of
2+2=4 (and other integer arithmetic properties) via generalizations of
how arithmetic worked. People did not learn how to count because some
philosopher first invented abstract algebra and then decided to define +
as an associative commutative operation continuous in both arguments.

Whereas 0^0 does not reflect real-world counting of any sort, but is a
concept that came about as a generalization of repeated multiplication.
The two (0^0 and 2+2) are not on the same footing at all.

T

--
Let's eat some disquits while we format the biskettes.
```
Jun 23 2016
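One concrete family of such curves, as a worked example added for
illustration (not taken from the posts): let g(x) = c / ln x for a fixed
constant c, so that g(x) -> 0 as x -> 0+, and

```
x^{g(x)} = e^{g(x)\,\ln x} = e^{(c/\ln x)\,\ln x} = e^{c}
```

along the whole path. The limit of x^y along (x, g(x)) as x -> 0+ is
therefore e^c, which can be made any positive value by choice of c.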
```On 24.06.2016 08:49, H. S. Teoh via Digitalmars-d wrote:
I don't care about any of the limits. ^^ has an essential
discontinuity at (0,0), so the limits don't need to have a bearing on
how you define the value, but if they do, consider that there is only
one direction in which the limit is 0, and an uncountably infinite
number of directions in which the limit is 1.

That is only true for straight-line paths. You can instead approach
(0,0) along curves (f(x), g(x)) with f(0)=g(0)=0, such that the limit
as x->0 can be made to approach any number you wish.

Yes. I was talking about straight lines.
```
Jun 24 2016
```On 24.06.2016 08:49, H. S. Teoh via Digitalmars-d wrote:
...How do you define "number of functions" when m and n are
non-integer?
...

I don't. But even when n is an arbitrary real number, I still want empty
products to be 1.

...

Have a look at this plot: http://www.wolframalpha.com/input/?i=x%5E-y
Can you even spot the discontinuity? (I can't.)

That's because your graph is of the function x^(-y), which is a
completely different beast from the function x^y.

It's a mirror image.

If you look at the
graph of the latter,

Then the point (0,0) is hidden.

you can see how the manifold curves around x=0 in
such a way that the curvature becomes extreme around (0,0), a sign of an
inherent discontinuity.
...

Well, there's no question that the discontinuity is there. It's just
that, if you want to expose the discontinuity by taking limits along
some path not intersecting the y axis, you need to choose it in a
somewhat clever way in order not to end up with the value 1. Of course,
this is not really a strong argument for assigning any specific value at
(0,0).

[...]
Why do you think that is? Again consider my example where a+b is actually
a+b unless a=b=2, in which case it is 5.

Your example has no bearing on this discussion at all. ...

It's another example of a notational convention. I was trying to figure
out if/why you consider some well-motivated conventions more arbitrary
than others.

Besides, it's very clear from basic arithmetic what 2+2 means, whereas
the same can't be said for 0^^0.
...

That's where we disagree. Both expressions arise naturally in e.g.
combinatorics.

...

(The offset by 1 here is IMHO a real example of an unfortunate and
arbitrary choice of notation, but I hope that does not take away from
my real point.)

This "unfortunate and arbitrary" choice was precisely due to the same
idea of aesthetically simplifying the integral that defines the
function. ...

n! = ∫dx [0≤x]xⁿ·e⁻ˣ.
Γ(t) = ∫dx [0≤x]xᵗ⁻¹·e⁻ˣ.

One of those is simpler, and it is not Γ.

...
Defining 0^0=1 in like manner makes certain definitions and use cases
"nicer", but not as nice in other cases. Why not just face the fact that
it's an essential discontinuity that is best left undefined?
...

Because you don't actually care about continuity properties of x^y as a
function in (x,y) in the cases when you encounter 0^0 in practice.

I agree that you sometimes don't want to consider (0,0). If you want to
study x^y as a continuous function, just say something to the effect of:
"Consider the continuous map ℝ⁺×ℝ ∋ (x,y) ↦ xʸ ∈ ℝ". This
excludes (0,0)
from consideration in the way you want (it also excludes the case that x
is a negative integer and y is an integer, for example).

There is no good reason to complicate e.g. polynomial and power series
notations by default. Either x or y often (or even, usually) does not
vary continuously (and if y varies, x is usually not 0).

Anyway, 2+2=4 because this makes the definition of + "nicer". It is
not an arbitrary choice. There is /a reason/ why it is "nicer".

This is a totally backwards argument. 2+2=4 because that's how counting
in the real world works,

Yes, and 0^0 = 1 because that is how counting in the real world works.

...

Whereas 0^0 does not reflect real-world counting of any sort,

Yes it does. It counts the number of empty sequences of nothing.

but is a
concept that came about as a generalization of repeated multiplication.

That's one way to think about it, and then you would expect it to
actually generalize repeated multiplication, would you not?

BTW: I found this funny:

http://www.wolframalpha.com/input/?i=product+x+for+i%3D1+to+n

http://www.wolframalpha.com/input/?i=product+0*i+from+i%3D1+to+0
('*i' needed to pass their parser for some reason.)

http://www.wolframalpha.com/input/?i=0%5E0

By treating 0^0 consistently as 1, you never run into this kind of
problem. Doesn't this demonstrate that they are doing it wrong? How
would you design the notation?
```
Jun 24 2016
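As a small illustration of the polynomial/power series point above (the
evalPoly helper is hypothetical, not code from the thread): evaluating
p(x) = a[0] + a[1]·x + ... by the obvious loop only returns the constant
term at x = 0 if 0^0 is treated as 1.

```
import std.math : pow;

// Hypothetical helper: p(x) = a[0] + a[1]*x + ... + a[$-1]*x^^(a.length-1).
// The i == 0 term contributes a[0] at x == 0 only because pow(0.0, 0) == 1.
double evalPoly(const double[] a, double x)
{
    double s = 0;
    foreach (i, coeff; a)
        s += coeff * pow(x, i);
    return s;
}

unittest
{
    // 3 + 2x + x^2 evaluated at 0 should be just the constant term 3.
    assert(evalPoly([3.0, 2.0, 1.0], 0.0) == 3.0);
}
```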
```On Friday, 24 June 2016 at 21:11:52 UTC, Timon Gehr wrote:
By treating 0^0 consistently as 1, you never run into this kind
of problem. Doesn't this demonstrate that they are doing it
wrong? How would you design the notation?

You really need to look at the context. It is not uncommon that
solvers favour total functions where even division is defined for
"x / 0", for very pragmatic reasons. So even a total function for
power could be useful in some contexts (i.e. considered defined
not only for "0^0", but also "0^-1").

However, IEEE754-2008 is more principled. It separates pow into
power with an integer exponent, "pown" (where 0^0==1), and power
over inexact reals, "powr". This is rather clear from the
exception raised for "powr(1.0,inf)": a floating-point "1.0" is
inexact, so the true value could lie on either side of 1 and the
limit could go towards 0, 1 or inf.

And you cannot really know if inexact reals of "0.0^0.0" should
go towards 0.0 or 1.0. It would make most sense to return the
interval of potential values: [0,1]
```
Jun 24 2016
```Btw, one might take the view that "pown(real x,int y)" is
describing monomials:

https://en.wikipedia.org/wiki/Monomial

Whereas "powr(real x,real y)" is a shorthand for "exp(y *
log(x))".
```
Jun 24 2016
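A rough sketch of the two flavors (illustrative stand-ins, not the
IEEE754-2008 operations or any Phobos API): with an integer exponent,
0^0 falls out as the empty product 1, while defining power as
exp(y * log(x)) makes 0^0 come out as NaN.

```
import std.math : exp, log, isNaN;

// "pown"-like: integer exponent via repeated squaring.
// pownLike(0.0, 0) is the empty product, i.e. 1.
double pownLike(double x, uint n)
{
    double r = 1.0;
    for (; n != 0; n /= 2)
    {
        if (n & 1) r *= x;
        x *= x;
    }
    return r;
}

// "powr"-like: defined only through exp/log.
// log(0.0) is -inf and 0.0 * -inf is NaN, so powrLike(0.0, 0.0) is NaN.
double powrLike(double x, double y)
{
    return exp(y * log(x));
}

unittest
{
    assert(pownLike(0.0, 0) == 1.0);
    assert(isNaN(powrLike(0.0, 0.0)));
}
```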
```On Thursday, 23 June 2016 at 23:33:46 UTC, Timon Gehr wrote:
I don't want to argue this at all. x^^0 is an empty product no
matter what set I choose x and 0 from. 0^^0 = 1 is the only
reasonable convention, and this is absolutely painfully obvious
from where I stand. What context are you using 'pow' in that
would suggest otherwise?

You can build a non-analytical flavor of pow on top of the
analytical flavor:
int pow1(int x, int y)
{
    if (x == 0 && y == 0) return 1;
    return pow(x, y);
}
```
Jun 24 2016
```On 06/24/2016 10:03 AM, Kagamin wrote:
On Thursday, 23 June 2016 at 23:33:46 UTC, Timon Gehr wrote:
I don't want to argue this at all. x^^0 is an empty product no matter
what set I choose x and 0 from. 0^^0 = 1 is the only reasonable
convention, and this is absolutely painfully obvious from where I
stand. What context are you using 'pow' in that would suggest otherwise?

You can build a non-analytical flavor of pow on top of the analytical
flavor:
int pow1(int x, int y)
{
    if (x == 0 && y == 0) return 1;
    return pow(x, y);
}

Apparently that's what powr/pow do? -- Andrei
```
Jun 24 2016    Walter Bright <newshound2 digitalmars.com> writes:
```On 6/23/2016 10:22 AM, Andrei Alexandrescu wrote:
https://dpaste.dzfl.pl/e53acb41885a

/**
 */
int pow(int lhs, uint rhs, ref bool overflow)
{ return powImpl(lhs, rhs, overflow); }
/// ditto
long pow(long lhs, uint rhs, ref bool overflow)
{ return powImpl(lhs, rhs, overflow); }
/// ditto
uint pow(uint lhs, uint rhs, ref bool overflow)
{ return powImpl(lhs, rhs, overflow); }
/// ditto
ulong pow(ulong lhs, uint rhs, ref bool overflow)
{ return powImpl(lhs, rhs, overflow); }

// Inspiration: http://www.stepanovpapers.com/PAM.pdf
private T powImpl(T)(T b, uint e, ref bool overflow)
{
    import core.checkedint : muls, mulu;
    import std.traits : isUnsigned;
    static if (isUnsigned!T) alias mul = mulu;
    else alias mul = muls;

    if (e <= 1)
    {
        if (e == 1) return b;
        if (b == 0) overflow = true;
        return 1;
    }

    T r = b;
    assert(e > 1);
    --e;
    // Loop invariant: r * (b ^^ e) is the actual result
    outer: for (;;)
    {
        if (e % 2 != 0) goto geeba;
        for (;;)
        {
            b = mul(b, b, overflow);
            e /= 2;
            continue outer;
        geeba:
            r = mul(r, b, overflow);
            if (e == 1) return r;
        }
    }
    assert(0);
}

unittest
{
    import std.stdio;
    bool overflow;
    foreach (uint i; 0 .. 21)
    {
        assert(pow(3u, i, overflow) == 3u ^^ i);
        assert(!overflow);
    }
    assert(pow(3u, 21u, overflow) == 3u ^^ 21);
    assert(overflow);
}

void main(){}
```
Jun 23 2016