digitalmars.D - Different results for uniform random number generation between D and C++

Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
Hello all,

A little curiosity I noticed when comparing fine-detailed results from 
simulations where I had code in both C++ and D.

Here's a little program that prints out the first 10 results of the Mersenne 
Twister RNG, and then uses the same sequence to generate 10 
uniformly-distributed numbers in [0, 1).

////////////////////////////////////////////////////////////////////////////
import std.random, std.range, std.stdio;

void main()
{
	auto rng = Random(12345);

	foreach(x; takeExactly(rng, 10))
		writeln(x);

	writeln();
	writeln("Min: ", rng.min);
	writeln("Max: ", rng.max);
	writeln();
	
	rng.seed(12345);

	foreach(i; 0..10)
		writefln("%.64f", uniform!("[)", double, double)(0.0, 1.0, rng));
}
////////////////////////////////////////////////////////////////////////////

And now here's something similar written in C++ and using the Boost RNGs:

////////////////////////////////////////////////////////////////////////////
#include <iostream>
#include <iomanip>
#include <boost/random.hpp>

int main(void)
{
	boost::mt19937 rng(12345);

	for(int i=0;i<10;++i)
		std::cout << rng() << std::endl;
	
	std::cout << std::endl;
	std::cout << "Min: " << rng.min() << std::endl;
	std::cout << "Max: " << rng.max() << std::endl;
	std::cout << std::endl;

	rng.seed(12345);

	for(int i=0;i<10;++i)
		std::cout << std::setprecision(64)
		          << boost::uniform_real<double>(0.0, 1.0)(rng) << std::endl;
}
////////////////////////////////////////////////////////////////////////////

When I run these on my 64-bit machine I get different results for the uniform 
doubles, which show up round about the 15th or 16th decimal place.

If I replace double with float, I get a similar result, but now with the 
difference starting at about the 7th or 8th decimal place.
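
If it helps to quantify the difference, one option is to print the values in 
hex-float form and count matching mantissa bits with std.math.feqrel, rather 
than eyeballing decimal digits.  A small D sketch -- the two literals below 
are made-up placeholders, not actual output from the programs above:

////////////////////////////////////////////////////////////////////////////
import std.math, std.stdio;

void main()
{
	// Hypothetical values standing in for one D result and the
	// corresponding Boost result; substitute the real output.
	double fromD     = 0.92961609281714790;
	double fromBoost = 0.92961609281714802;

	// %a prints the exact bit pattern as a hexadecimal float.
	writefln("%a", fromD);
	writefln("%a", fromBoost);

	// feqrel returns the number of matching mantissa bits (at most 53
	// for double).
	writefln("matching mantissa bits: %s of 53", feqrel(fromD, fromBoost));
}
////////////////////////////////////////////////////////////////////////////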

I've tried tweaking the internals of std.random's uniform() just to confirm 
it's not a rounding error caused by a subtly different order of arithmetic 
operations; D's results remain unchanged.  So I'm curious why I see these 
differences -- OK, floating point is a minefield, but I thought these were 
hardware-implemented variables and operations which I'd expect to match 
between C++ and D.
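
For what it's worth, two algebraically identical ways of scaling the same 
32-bit draw into the unit interval are not guaranteed to agree in the last 
bit, because the rounding happens at different points.  A minimal D sketch of 
that effect -- this is not the code that either std.random or Boost actually 
uses, just an illustration:

////////////////////////////////////////////////////////////////////////////
import std.random, std.stdio;

void main()
{
	auto rng = Mt19937(12345);

	foreach(i; 0..10)
	{
		uint x = rng.front;
		rng.popFront();

		// Divide by uint.max directly...
		double byDivision   = cast(double) x / cast(double) uint.max;

		// ...or multiply by its (inexactly rounded) reciprocal.
		double byReciprocal = cast(double) x * (1.0 / cast(double) uint.max);

		writefln("%.20f  %.20f  identical: %s",
		         byDivision, byReciprocal, byDivision == byReciprocal);
	}
}
////////////////////////////////////////////////////////////////////////////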

OK, it's not the end of the world and these numbers are identical to a high 
degree of accuracy -- but I'm just curious as to why the difference.  It's 
also slightly frustrating not to have precise agreement between the output of 
two libraries that are so obviously designed to produce an identical API and 
feature set.

Best wishes,

     -- Joe
Jul 30 2012
Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Tue, 31 Jul 2012 00:33:43 +0100
Joseph Rushton Wakeling <joseph.wakeling webdrake.net> wrote:

 [...original post quoted in full...]

My (limited) understanding is that it's almost practically impossible to get 
consistency in x86 floating point unless you're using SSE:

http://www.yosefk.com/blog/consistency-how-to-defeat-the-purpose-of-ieee-floating-point.html

Maybe the effect you're observing could be related to what he describes?
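
A small D sketch of the kind of thing the article describes, assuming an x86 
build where D's real maps to the 80-bit x87 format: whether an intermediate 
is kept at extended precision or rounded to double after every step can 
change the final double, and which behaviour you get from plain double 
arithmetic depends on whether the compiler emitted x87 or SSE code for it.

////////////////////////////////////////////////////////////////////////////
import std.stdio;

void main()
{
	double big   = 1.0e17;
	double small = 1.0;

	// Rounded to double after each operation, the 1.0 is lost and the
	// result is 0.0 -- *if* the arithmetic really is done at double
	// precision (e.g. with SSE).  An x87 code path may keep the
	// intermediate at 80 bits and give 1.0 instead.
	double narrowed = (big + small) - big;

	// Explicitly carried in 80-bit real (on x86), then rounded to double
	// at the end: the 64-bit mantissa preserves the 1.0.
	double widened = cast(double)((cast(real) big + small) - big);

	writefln("double intermediates: %s", narrowed);
	writefln("real intermediates:   %s", widened);
}
////////////////////////////////////////////////////////////////////////////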
Jul 30 2012
"Era Scarecrow" <rtcvb32 yahoo.com> writes:
On Tuesday, 31 July 2012 at 01:00:38 UTC, Nick Sabalausky wrote:

 My (limited) understanding is that it's almost practically 
 impossible to get consistency in x86 floating point unless 
 you're using SSE:

 http://www.yosefk.com/blog/consistency-how-to-defeat-the-purpose-of-ieee-floating-point.html

 Maybe the effect you're observing could be related to what he 
 describes?
I don't suppose there's an emulated floating-point type (like the one 
optionally available in the Linux kernel) that could be used in place of 
normal floating point?

I wonder if the C++ version is wrong.  In Walter's article on floating point 
he pointed out that few C++ applications set/reset the floating-point control 
register between operations or between functions (when they change it)... or 
was that between applications as a whole (and an occasional OS issue)?

Would the problem be even worse if you mixed MMX and FPU code, since they use 
the same registers, just in a different mode?
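
On the set/reset point: Phobos does expose the control state through 
std.math.FloatingPointControl (if I'm remembering the name right), which 
saves the state and restores it when it goes out of scope.  A rough sketch of 
changing the rounding mode:

////////////////////////////////////////////////////////////////////////////
import std.math, std.stdio;

void main(string[] args)
{
	// Use a runtime value so the division can't be constant-folded.
	double num = args.length;          // 1.0 for a plain run

	writefln("%.20f", num / 3.0);      // default: round to nearest

	{
		FloatingPointControl fpctrl;
		fpctrl.rounding = FloatingPointControl.roundUp;
		writefln("%.20f", num / 3.0);  // last digit differs
		// previous mode is restored when fpctrl goes out of scope
	}

	writefln("%.20f", num / 3.0);      // back to round to nearest
}
////////////////////////////////////////////////////////////////////////////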
Jul 31 2012
FeepingCreature <default_357-line yahoo.de> writes:
Try comparing the mxcsr register. It controls rounding mode when using SSE ops
(even the single-float ones).

Here's a good page that documents all the bits of the mxcsr:
http://softpixel.com/~cwright/programming/simd/sse.php

The x87 equivalent is the control word; see
http://www.website.masmforum.com/tutorials/fptute/fpuchap1.htm#cword
(fstcw/fldcw).
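
For reference, both registers can be read from D with a couple of lines of 
inline asm (x86/x86_64 only, and assuming DMD's inline assembler accepts 
stmxcsr alongside fstcw):

////////////////////////////////////////////////////////////////////////////
import std.stdio;

void main()
{
	uint   mxcsr;
	ushort fpucw;

	version (D_InlineAsm_X86_64)
	{
		asm
		{
			stmxcsr mxcsr;   // SSE control/status register
			fstcw   fpucw;   // x87 control word
		}
	}
	else version (D_InlineAsm_X86)
	{
		asm
		{
			stmxcsr mxcsr;
			fstcw   fpucw;
		}
	}
	else
	{
		static assert(0, "needs the x86 inline assembler");
	}

	// MXCSR keeps its rounding control in bits 13-14; the x87 control
	// word keeps rounding control in bits 10-11 and precision control
	// (single/double/extended) in bits 8-9.
	writefln("mxcsr  = 0x%08X  RC = %d", mxcsr, (mxcsr >> 13) & 3);
	writefln("x87 cw = 0x%04X  RC = %d  PC = %d",
	         fpucw, (fpucw >> 10) & 3, (fpucw >> 8) & 3);
}
////////////////////////////////////////////////////////////////////////////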
Jul 31 2012