
digitalmars.D - New Lagged Fib. PRNG gen and random2.d

"monarch_dodra" <monarchdodra gmail.com> writes:
Hi everyone:

I (finally) ported boost's implementation of the Lagged Fibonacci 
generator to D. I am creating this thread for review.
https://github.com/D-Programming-Language/phobos/pull/1521

The Lagged Fibonacci generator has interesting properties: for 
starters, you can generate integers or floats with it. With 
floats, it generates numbers in the 0 .. 1 range, making it a very 
nice generator for users who want simple uniform floating point 
generation in that range.

Another interesting property is that one can easily specify bit 
depth and return type. So, for example, it is possible to generate 
ubytes in the [0, 128) range (7-bit depth), or floats with a 
0.25 step (e.g. [0, 0.25, 0.5, 0.75]).
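To make that concrete, here is a toy sketch of the additive lagged Fibonacci recurrence (Python for illustration only; the lags and seed words are made up, not the Boost/Phobos parameters):

```python
from collections import deque

def lagged_fib(seed_words, long_lag=7, short_lag=3, bits=7):
    """Toy additive lagged Fibonacci: x(n) = x(n-7) + x(n-3) (mod 2^bits)."""
    assert len(seed_words) == long_lag
    hist = deque(seed_words, maxlen=long_lag)  # the last long_lag outputs
    mask = (1 << bits) - 1
    while True:
        x = (hist[0] + hist[long_lag - short_lag]) & mask
        hist.append(x)  # maxlen drops the oldest value automatically
        yield x

# 7-bit depth -> values in [0, 128)
g = lagged_fib([3, 1, 4, 1, 5, 9, 2], bits=7)
assert all(0 <= next(g) < 128 for _ in range(1000))

# Floating point flavour: divide by 2^bits; bits=2 gives the 0.25 step above.
g2 = lagged_fib([1, 2, 3, 0, 1, 2, 3], bits=2)
assert set(next(g2) / 4 for _ in range(100)) <= {0.0, 0.25, 0.5, 0.75}
```

The same recurrence serves both flavours: masking gives the integral range, and dividing by 2^bits gives the fixed-step floating point range.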

In terms of quality, I'm no expert, but it is reputed to be 
good when properly seeded.

Please review the design and tell me what you think.

--------

One of the design goals is that I think having a new 
PRNG is the perfect opportunity to try out the *long* discussed 
but never introduced PRNG reference semantics. The 
LaggedFibonacciGenerator is designed as the first of a new family 
of generators: implemented as a pimpl, and without a "save" 
primitive, it is *impossible* to accidentally generate the same 
sequence twice. It does have "dup", for those that *must* have 
backtracking, but it is explicit.
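The reference-semantics idea can be sketched in miniature like this (a Python stand-in for the D design; the class and the toy LCG step are hypothetical, purely to show shared state plus an explicit dup):

```python
import copy

class RefRng:
    """Toy wrapper over a shared payload ("pimpl"): copies share state."""
    class _Payload:
        def __init__(self, seed):
            self.state = seed

    def __init__(self, seed):
        self._p = RefRng._Payload(seed)

    @property
    def front(self):
        return self._p.state

    def popFront(self):
        # Toy LCG step, only to make the shared mutation visible.
        self._p.state = (self._p.state * 48271) % 2147483647

    def dup(self):
        # The *only* way to replay a sequence: an explicit payload copy.
        c = RefRng.__new__(RefRng)
        c._p = copy.copy(self._p)
        return c

a = RefRng(42)
b = copy.copy(a)   # like D pass-by-value: the wrapper copies, the payload doesn't
b.popFront()
assert a.front == b.front   # advancing the copy advanced the original too

c = a.dup()        # explicit dup: independent state from here on
a.popFront()
assert a.front != c.front
```

Because copying the wrapper never copies the payload, accidentally replaying a sequence is impossible; only the explicit dup creates an independent stream.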

Implemented as a pimpl, its pass-by-value is virtually free 
(MT19937 is approx. 2 KiB).

Seeding *must* be done explicitly, so as to provide the fastest 
possible iteration. There is no "seed()" function, so there is no 
default seed *value* either, which will prevent the newbie 
mistake of "why does my random program always output the same 
random sequence?", and promote the use of "seed(unpredictableSeed)".

Other than that, there is not much to say... the idea is to try 
to stress test this PRNG, and learn as much as possible, before 
we tackle random2.

--------

I'll copy-paste the example documentation here:

This is example *usage*:

///
unittest
{
     //Create a new Generator.
     LaggedFibonacci!ulong lf;

     //It is not yet seeded.
     assert(!lf.isSeeded);

     //Seed with a random seed.
     lf.seed(unpredictableSeed);

     //generate a new array of 10 random numbers.
     ulong[] numbers1 = lf.take(10).array();

     //An existing array can be filled with an algorithm.
     ulong[] numbers2 = new ulong[](10);
     numbers2.fill(lf);

     //Because of the reference semantics, and since there is no save,
     //we can rest assured both arrays are different.
     assert(!equal(numbers1, numbers2));
}

These are various flavours of the generator.

///
unittest
{
     //Simple declaration
     LaggedFibonacci!double myFloatingPointGenerator;
     LaggedFibonacci!ulong  myIntegralGenerator;

     //Declaration with specific length
     LaggedFibonacci607!double  mySimpleFloatingPointGenerator;
     LaggedFibonacci44497!ulong myComplexIntegralGenerator;

     //Generate specific type: ubytes in the range [0, 256)
     LaggedFibonacci!(ubyte, 8) myByteGenerator;

     //Generate specific type: doubles with a step of 0.125
     LaggedFibonacci!(double, 3) myPieGenerator;

     //Generate with high precision reals
     LaggedFibonacci!(real, 64) myPreciseGenerator;
}

--------

Please (kindly) destroy!
Aug 25 2013
Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 25/08/13 18:03, monarch_dodra wrote:
 I (finally) ported boost's implementation of the Lagged Fibonacci
 generator to D. I am creating this thread for review.
 https://github.com/D-Programming-Language/phobos/pull/1521

 The Lagged Fibonacci generator has interesting properties: For starters,
 you can generate integers or floats with it. With floats, it generates
 numbers in a 0 .. 1 range, making it a very nice generator for users
 that want simple uniform floating point generation in that range.

 Another interesting property, is that one can easily specify bit depth
 and return type. So for example, it is possible to generate ubytes in
 the [0 .. 128] range (7 bit depth), or floats with a 0.25 step (eg
 [0, 0.25, 0.5, 0.75]).

 In terms of quality, I'm no expert, but supposedly, it's supposed to be
 good, when properly seeded.

Nice! :-)
 Please review the design, and tell me what you think?

 --------

 One of the design goals with it, is that I think having a new PRNG is
 the perfect opportunity to try out the *long* discussed but never
 introduced PRNG reference-semantics. The LaggedFibonacciGenerator is
 designed as the first of a new family of generators: Implemented as a
 pimpl, and without a "save" primitive, it is *impossible* to
 accidentally generate the same sequence twice. It does have "dup", for
 those that *must* have backtracking, but it is explicit.

I think that as a general approach it's nice, but we should preferably make the method of wrapping a payload generic. In this way it suffices to have one wrapper struct that works, and then it's trivial for users to define new RNG types by simply defining new payload types, without having to worry about ensuring the reference semantics for each new design.

That was the goal of my RandomGenerator wrapper struct:

http://forum.dlang.org/thread/mailman.443.1377369357.1719.digitalmars-d puremagic.com
http://forum.dlang.org/thread/5218FD04.8040404 webdrake.net

... which we've already discussed off-list. I think it should be trivial to tweak that to be a generic version of what you've done here.
 Implemented as a pimpl, its pass-by-value is virtually free (MT19937 is
 approx. 2 KiB).

 Seeding *must* be done explicitly, so as to provide the fastest possible
 iteration. There is no "seed()" function, so there is no default seed
 *value* either, which will prevent the newbie mistake of "why does my
 random program always output the same random sequence", and promote the
 use of "seed(unpredictableSeed)".

I think this is a matter of taste, but I don't think it's bad to force the user to seed things. Note that the existing RNGs go to the extent of actually checking inside popFront() etc. if the RNG has been initialized, and if not, seeding it with a default value.

The only thing I'll remark is that these default-seed ideas are quite prevalent in RNG implementations in other languages. Some of the existing unittests may even be based on that default seeding. So, there may be some expectation of that option being available.
 Other than that, there is not much to say... the idea is to try to stress test
 this PRNG, and learn as much as possible, before we tackle random2.

I think the two of us agree on the general principle that RNGs should be re-done as structs that wrap a reference to a payload. Performance- and design-wise, I'd say that the following things are worth considering:

  (1) Should the memory management be done via GC (new) as in your
      implementation, or manually managed? Bear in mind the general
      desirable principle that Phobos code shouldn't rely on the GC if
      it doesn't have to.

      I don't know what impact the use of the GC here can be expected
      to have on overall performance of a piece of code, and whether it
      might rule out use of this design in highly performance-conscious
      use-cases such as games, etc.

      My own use of RefCounted to ensure manual memory management (no
      GC) seems to have some scope-related issues, see:

      http://forum.dlang.org/post/mailman.445.1377370120.1719.digitalmars-d puremagic.com

      ... and I assume that if the payload was allocated manually via
      malloc instead of new, then the same might be true.

  (2) Should we provide a generic payload wrapper, or require RNG
      creators to implement everything manually? I strongly support a
      generic wrapper, as there is too great a risk of implementation
      error if we require the wrapping-of-a-reference to be done
      manually each time.

  (3) As we discussed off-list, if we do implement a generic wrapper,
      how should it work? Should it work as per my implementation,

          alias RandomGenerator!MyEngine MyRNG;

      or instead as you suggested, as a template mixin,

          struct MyRNG
          {
              mixin RandomGenerator!MyEngine;
          }

      I can see the case for the latter as it will result in much more
      readable type names in error messages. However, I think it has
      the potential to obscure implementation details in a way that
      isn't helpful. Consider what happens if we do:

          alias RandomGenerator!MtEngine19937 Mt19937_64;
          // ... WRONG!! we forgot to tweak the engine when we
          // copy-pasted, but at least we'll see in any error messages
          // what type of internal engine is being used

      ... compared to:

          struct Mt19937
          {
              mixin RandomGenerator!MtEngine19937;
          }
          // ... Copy-paste failure again, but this time it's obscured
          // and we'll never find out unless we look at the actual
          // source code.

  (4) The devil's advocate position -- should we take the simple route
      to reference-type RNGs by making them final classes? It's trivial
      to do, but to me it feels "un-Phobos-ish" and will also have the
      problem of requiring a lot more code rewrites on the part of
      std.random users who want to upgrade to std.random2.

That seems like a good opening summary of issues -- destroy. :-)

Best wishes,

    -- Joe
Aug 25 2013
"bearophile" <bearophileHUGS lycos.com> writes:
monarch_dodra:

 The Lagged Fibonacci generator has interesting properties: For 
 starters, you can generate integers or floats with it. With 
 floats, it generates numbers in a 0 .. 1 range, making it a 
 very nice generator for users that want simple uniform floating 
 point generation in that range.

What is its relative performance (measured with LDC2 and/or GDC) compared to Xorshift and the Mersenne Twister, both when it is used to generate integers and when it's used to generate floating point numbers in 0..1? (For the floating point case you have to use something like random01 to have a fair comparison.)
 In terms of quality, I'm no expert, but supposedly, it's 
 supposed to be good, when properly seeded.

Before being merged into Phobos, someone has to measure the actual quality of the random numbers it produces, to avoid problems like the ones we had with Xorshift. There are suites and tools to measure the quality of random numbers, like a more modern version of the famous Diehard.

Bye,
bearophile
Aug 25 2013
Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 25/08/13 22:30, bearophile wrote:
 Before being merged into Phobos someone has to measure a bit the actual quality
 of the random numbers it produces. To avoid problems like the ones had with
 Xorshift. There are suites and tools to measure the quality of random numbers,
 like a more modern version of the famous diehard.

Diehard is out of date. There's the Dieharder suite, and there's another by Pierre L'Ecuyer and colleagues, TestU01:

http://www.phy.duke.edu/~rgb/General/dieharder.php
http://www.iro.umontreal.ca/~simardr/testu01/tu01.html

Dieharder is in the repositories of Debian/Ubuntu, and a first approximation of a D random test suite is probably as simple as writing a D program that outputs a sufficient quantity of random numbers, and piping them through to Dieharder. It's on the TODO list but there have been too many other things going on recently. :-P

By the way, the Xorshift problem didn't need a complicated test suite. This was the code that showed up the problem:

writeln("Let's test the uniformity of random numbers produced with different RNGs.");
size_t n = 1000_000;
foreach(Rng; PseudoRngTypes)
{
    writeln("Generating ", n, " random numbers in [0, 1) with ", Rng.stringof);
    auto rng = Rng(unpredictableSeed);
    auto dist = Distribution(0.0, 100.0, 20);
    foreach(i; 0 .. n)
    {
        dist.add(uniform(0.0, 100.0, rng));
    }
    foreach(bin, h, c; dist.distribution)
    {
        writeln("\t", bin, "\t", h, "\t", c);
    }
    writeln;
}

(A few data structures aren't described here, but you get the idea.)

The Xorshift problem only arose because the reference code was itself buggy. For something like Lagged Fibonacci there are already good and battle-tested implementations out there (e.g. Boost.Random) and a comparison of results for the same seeds should be sufficient for inclusion.
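The compare-against-a-reference idea can be sketched like so (Python, with toy lags rather than the real Boost parameters): two independently written implementations of the same recurrence must agree element-for-element for the same seed.

```python
from collections import deque

def impl_a(seed, n, j=7, k=3, m=1 << 16):
    """Straightforward list-based version: x(i) = x(i-j) + x(i-k) (mod m)."""
    hist = list(seed)
    out = []
    for _ in range(n):
        x = (hist[-j] + hist[-k]) % m
        hist.append(x)
        out.append(x)
    return out

def impl_b(seed, n, j=7, k=3, m=1 << 16):
    """Ring-buffer version of the same recurrence, written independently."""
    hist = deque(seed, maxlen=j)
    out = []
    for _ in range(n):
        x = (hist[0] + hist[j - k]) % m
        hist.append(x)
        out.append(x)
    return out

seed = [12, 345, 6789, 42, 0, 7, 99]
assert impl_a(seed, 10000) == impl_b(seed, 10000)
```

Replace one side with output captured from Boost.Random for the same seed and you have exactly the seed-for-seed comparison suggested above.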
Aug 25 2013
"monarch_dodra" <monarchdodra gmail.com> writes:
On Sunday, 25 August 2013 at 20:30:04 UTC, bearophile wrote:
 Before being merged into Phobos someone has to measure a bit 
 the actual quality of the random numbers it produces. To avoid 
 problems like the ones had with Xorshift. There are suites and 
 tools to measure the quality of random numbers, like a more 
 modern version of the famous diehard.

I can't comment on the quality of the PRNG itself, as I'm no RNG expert myself, but the code *was* ported from boost, and produces the same output. So arguably, we have matching quality.

On Sunday, 25 August 2013 at 20:30:04 UTC, bearophile wrote:
 What is its relative performance (measured with LDC2 and/or 
 GDC) compared to Xorshift and the Mersenne Twister when it is 
 used to generate integers and when it's used to generate 
 floating point numbers in 0..1 (for the floating point case you 
 have to use something like random01 to have a fair comparison)?

A valid question. I benched it, and the results are, IMO, *very* good. The tests were run using the reference dmd 2.063.2, on win7_64.

When generating uints with 32 bits of randomness, I'm on par with MinstdRand0 in terms of speed, which is pretty good I think. It is faster than MT, but still slower than XS. However, it can *also* generate ulongs with 64 bits of randomness, and doing so only takes roughly 40% more time, which is great. I'd have to bench on a linux 64 with a dmd64 compiler: I think a true 64 bit implementation should have even less overhead, making LF *faster* (!) than MinstdRand0.

Another interesting bench was generating doubles: Lagged Fibonacci is able to natively generate doubles. As such, I benched it against the other generators, after dividing their output by Generator.max. This should be of interest to you... The results are also good: I'm getting *slightly* slower times than MinstdRand0 (roughly 10% slower), but still faster than MT.

HOWEVER, PRNG/PRNG.max will generate doubles with only 32 bits of significant fractional part. LF will generate doubles with either 48 bits of significance (by default), or the full 52 bits. It is *also* able to generate reals with the full 64 bits of significant data, but this is somewhat slower. Awesome!

--------

That's my biased review/comment on the numbers. I'll post them here, so you can interpret them yourself.
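The precision point can be made concrete with a small sketch (Python; plain arithmetic, no PRNG involved): a double built as prng_value / prng_max from a 32-bit generator moves in steps of about 2^-32, while a natively generated 52-bit double moves in steps of 2^-52.

```python
step_32 = 1.0 / 2**32   # granularity of PRNG/PRNG.max with a 32-bit PRNG
step_52 = 1.0 / 2**52   # granularity of a natively generated 52-bit double

# The 32-bit route is about a million times coarser.
assert step_32 / step_52 == 2.0**20

# Two values a 52-bit generator can distinguish, but a 32-bit one cannot:
a = 1.0 / 2**52
b = 0.0
assert a != b
assert round(a / step_32) == round(b / step_32) == 0
```

So "faster than MT" comes on top of a finer-grained output, not instead of it.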
Aug 26 2013
"monarch_dodra" <monarchdodra gmail.com> writes:
On Monday, 26 August 2013 at 09:24:57 UTC, monarch_dodra wrote:
 That's my biased review/comment on the numbers. I'll post them 
 here, so you can interpret yourself.

Units are ms. 1_000_000_000 iterations.

/* Generate simple uints */
Time for LinearCongruentialEngine!(uint, 16807, 0, 2147483647): 7487
Time for LinearCongruentialEngine!(uint, 48271, 0, 2147483647): 7092
Time for MersenneTwisterEngine!(uint, 32, 624, 397, 31, 2567483615u, 11, 7, 2636928640u, 15, 4022730752u, 18): 13678
Time for XorshiftEngine!(uint, 32, 13, 17, 5) : 4266
Time for XorshiftEngine!(uint, 64, 10, 13, 10) : 4694
Time for XorshiftEngine!(uint, 96, 10, 5, 26) : 5697
Time for XorshiftEngine!(uint, 128, 11, 8, 19) : 5675
Time for XorshiftEngine!(uint, 160, 2, 1, 4) : 6401
Time for XorshiftEngine!(uint, 192, 2, 1, 4) : 6754
Time for LaggedFibonacciEngine!(uint, 32u, 607, 273) : 7630
Time for LaggedFibonacciEngine!(uint, 32u, 1279, 418) : 7196
Time for LaggedFibonacciEngine!(uint, 32u, 2281, 1252) : 7411
Time for LaggedFibonacciEngine!(uint, 32u, 3217, 576) : 7031
Time for LaggedFibonacciEngine!(uint, 32u, 4423, 2098) : 7402
Time for LaggedFibonacciEngine!(uint, 32u, 9689, 5502) : 6999
Time for LaggedFibonacciEngine!(uint, 32u, 19937, 9842) : 7478
Time for LaggedFibonacciEngine!(uint, 32u, 23209, 13470) : 7064
Time for LaggedFibonacciEngine!(uint, 32u, 44497, 21034) : 7528

/* Note: These generate longer ulong types */
Time for LaggedFibonacciEngine!(ulong, 48u, 607, 273) : 11860
Time for LaggedFibonacciEngine!(ulong, 48u, 1279, 418) : 11675
Time for LaggedFibonacciEngine!(ulong, 48u, 2281, 1252) : 11622
Time for LaggedFibonacciEngine!(ulong, 48u, 3217, 576) : 11621
Time for LaggedFibonacciEngine!(ulong, 48u, 4423, 2098) : 11699
Time for LaggedFibonacciEngine!(ulong, 48u, 9689, 5502) : 11898
Time for LaggedFibonacciEngine!(ulong, 48u, 19937, 9842) : 11883
Time for LaggedFibonacciEngine!(ulong, 48u, 23209, 13470) : 11870
Time for LaggedFibonacciEngine!(ulong, 48u, 44497, 21034) : 11908
Time for LaggedFibonacciEngine!(ulong, 64u, 607, 273) : 11745
Time for LaggedFibonacciEngine!(ulong, 64u, 1279, 418) : 11838
Time for LaggedFibonacciEngine!(ulong, 64u, 2281, 1252) : 11656
Time for LaggedFibonacciEngine!(ulong, 64u, 3217, 576) : 11709
Time for LaggedFibonacciEngine!(ulong, 64u, 4423, 2098) : 11768
Time for LaggedFibonacciEngine!(ulong, 64u, 9689, 5502) : 11863
Time for LaggedFibonacciEngine!(ulong, 64u, 19937, 9842) : 11923
Time for LaggedFibonacciEngine!(ulong, 64u, 23209, 13470) : 11878
Time for LaggedFibonacciEngine!(ulong, 64u, 44497, 21034) : 11905

/* Generating doubles */
/* via g.front/g.max */
Time for LinearCongruentialEngine!(uint, 16807, 0, 2147483647): 12093
Time for LinearCongruentialEngine!(uint, 48271, 0, 2147483647): 12057
Time for MersenneTwisterEngine!(uint, 32, 624, 397, 31, 2567483615u, 11, 7, 2636928640u, 15, 4022730752u, 18): 19510
Time for XorshiftEngine!(uint, 32, 13, 17, 5) : 10616
Time for XorshiftEngine!(uint, 64, 10, 13, 10) : 10634
Time for XorshiftEngine!(uint, 96, 10, 5, 26) : 10978
Time for XorshiftEngine!(uint, 128, 11, 8, 19) : 11715
Time for XorshiftEngine!(uint, 160, 2, 1, 4) : 12052
Time for XorshiftEngine!(uint, 192, 2, 1, 4) : 12017
Time for LaggedFibonacciEngine!(uint, 32u, 607, 273) : 12449
Time for LaggedFibonacciEngine!(uint, 32u, 1279, 418) : 12382
Time for LaggedFibonacciEngine!(uint, 32u, 2281, 1252) : 12334
Time for LaggedFibonacciEngine!(uint, 32u, 3217, 576) : 12321
Time for LaggedFibonacciEngine!(uint, 32u, 4423, 2098) : 12293
Time for LaggedFibonacciEngine!(uint, 32u, 9689, 5502) : 12482
Time for LaggedFibonacciEngine!(uint, 32u, 19937, 9842) : 12596
Time for LaggedFibonacciEngine!(uint, 32u, 23209, 13470) : 12605
Time for LaggedFibonacciEngine!(uint, 32u, 44497, 21034) : 12727
Time for LaggedFibonacciEngine!(ulong, 48u, 607, 273) : 17727
Time for LaggedFibonacciEngine!(ulong, 48u, 1279, 418) : 17626
Time for LaggedFibonacciEngine!(ulong, 48u, 2281, 1252) : 17527
Time for LaggedFibonacciEngine!(ulong, 48u, 3217, 576) : 17549
Time for LaggedFibonacciEngine!(ulong, 48u, 4423, 2098) : 19146
Time for LaggedFibonacciEngine!(ulong, 48u, 9689, 5502) : 18044
Time for LaggedFibonacciEngine!(ulong, 48u, 19937, 9842) : 17612
Time for LaggedFibonacciEngine!(ulong, 48u, 23209, 13470) : 17810
Time for LaggedFibonacciEngine!(ulong, 48u, 44497, 21034) : 17855
Time for LaggedFibonacciEngine!(ulong, 64u, 607, 273) : 17736
Time for LaggedFibonacciEngine!(ulong, 64u, 1279, 418) : 17630
Time for LaggedFibonacciEngine!(ulong, 64u, 2281, 1252) : 18671
Time for LaggedFibonacciEngine!(ulong, 64u, 3217, 576) : 17775
Time for LaggedFibonacciEngine!(ulong, 64u, 4423, 2098) : 17773
Time for LaggedFibonacciEngine!(ulong, 64u, 9689, 5502) : 17905
Time for LaggedFibonacciEngine!(ulong, 64u, 19937, 9842) : 17887
Time for LaggedFibonacciEngine!(ulong, 64u, 23209, 13470) : 17910
Time for LaggedFibonacciEngine!(ulong, 64u, 44497, 21034) : 17976

/* Using native doubles */
Time for LaggedFibonacciEngine!(double, 32u, 607, 273) : 14741
Time for LaggedFibonacciEngine!(double, 32u, 1279, 418) : 14475
Time for LaggedFibonacciEngine!(double, 32u, 2281, 1252) : 14829
Time for LaggedFibonacciEngine!(double, 32u, 3217, 576) : 14815
Time for LaggedFibonacciEngine!(double, 32u, 4423, 2098) : 14817
Time for LaggedFibonacciEngine!(double, 32u, 9689, 5502) : 14698
Time for LaggedFibonacciEngine!(double, 32u, 19937, 9842) : 14882
Time for LaggedFibonacciEngine!(double, 32u, 23209, 13470): 14819
Time for LaggedFibonacciEngine!(double, 32u, 44497, 21034): 14924

/* Native doubles, but higher precision */
Time for LaggedFibonacciEngine!(double, 48u, 607, 273) : 14657
Time for LaggedFibonacciEngine!(double, 48u, 1279, 418) : 14577
Time for LaggedFibonacciEngine!(double, 48u, 2281, 1252) : 14652
Time for LaggedFibonacciEngine!(double, 48u, 3217, 576) : 14464
Time for LaggedFibonacciEngine!(double, 48u, 4423, 2098) : 14619
Time for LaggedFibonacciEngine!(double, 48u, 9689, 5502) : 15168
Time for LaggedFibonacciEngine!(double, 48u, 19937, 9842) : 14904
Time for LaggedFibonacciEngine!(double, 48u, 23209, 13470): 15016
Time for LaggedFibonacciEngine!(double, 48u, 44497, 21034): 15083
Time for LaggedFibonacciEngine!(double, 64u, 607, 273) : 14807
Time for LaggedFibonacciEngine!(double, 64u, 1279, 418) : 14639
Time for LaggedFibonacciEngine!(double, 64u, 2281, 1252) : 14582
Time for LaggedFibonacciEngine!(double, 64u, 3217, 576) : 14587
Time for LaggedFibonacciEngine!(double, 64u, 4423, 2098) : 14794
Time for LaggedFibonacciEngine!(double, 64u, 9689, 5502) : 14893
Time for LaggedFibonacciEngine!(double, 64u, 19937, 9842) : 14802
Time for LaggedFibonacciEngine!(double, 64u, 23209, 13470): 15077
Time for LaggedFibonacciEngine!(double, 64u, 44497, 21034): 15115
Time for LaggedFibonacciEngine!(real, 64u, 607, 273) : 19308
Time for LaggedFibonacciEngine!(real, 64u, 1279, 418) : 19249
Time for LaggedFibonacciEngine!(real, 64u, 2281, 1252) : 19176
Time for LaggedFibonacciEngine!(real, 64u, 3217, 576) : 19128
Time for LaggedFibonacciEngine!(real, 64u, 4423, 2098) : 19878
Time for LaggedFibonacciEngine!(real, 64u, 9689, 5502) : 19221
Time for LaggedFibonacciEngine!(real, 64u, 19937, 9842) : 19211
Time for LaggedFibonacciEngine!(real, 64u, 23209, 13470) : 19241
Time for LaggedFibonacciEngine!(real, 64u, 44497, 21034) : 19268
Aug 26 2013
Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 26/08/13 11:24, monarch_dodra wrote:
 However, it can *also* generate ulongs with 64 bits of randomness, but doing so
 only takes roughly 40% more time. Which is great? I'd have to bench on a linux
 64 with a dmd64 compiler: I think a true 64 bit implementation should have even
 less overhead, making LF *faster* (!) than MinstdRand0.

I'll try and benchmark that myself. Boost also defines a 64-bit Mersenne Twister which we really should have in Phobos; it'll require some tweaks to the MersenneTwisterEngine design (some more template parameters).
Aug 26 2013
Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 26/08/13 11:35, monarch_dodra wrote:
 On Monday, 26 August 2013 at 09:24:57 UTC, monarch_dodra wrote:
 That's my biased review/comment on the numbers. I'll post them here, so you
 can interpret yourself.

Units are ms. 1_000_000_000 iterations

Cool -- so, you're getting faster performance than Mersenne Twister for higher-precision/wider-range numbers. That's not the same as statistical quality, of course. There are some general concerns about Lagged Fibonacci -- see:

https://en.wikipedia.org/wiki/Lagged_Fibonacci_generator

I really must implement Mt19937_64 for comparison, and get on with creating something that tests D random numbers against Dieharder.
Aug 26 2013
"monarch_dodra" <monarchdodra gmail.com> writes:
On Monday, 26 August 2013 at 09:37:11 UTC, Joseph Rushton Wakeling wrote:
 On 26/08/13 11:24, monarch_dodra wrote:
 However, it can *also* generate ulongs with 64 bits of 
 randomness, but doing so
 only takes roughly 40% more time. Which is great? I'd have to 
 bench on a linux
 64 with a dmd64 compiler: I think a true 64 bit implementation 
 should have even
 less overhead, making LF *faster* (!) than MinstdRand0.

 I'll try and benchmark that myself. Boost also defines a 64-bit Mersenne Twister which we really should have in Phobos; it'll require some tweaks to the MersenneTwisterEngine design (some more template parameters).

FYI, I tweaked my algo a bit, as the benches revealed an issue. Please wait for a new commit before testing my code.
Aug 26 2013
"monarch_dodra" <monarchdodra gmail.com> writes:
On Sunday, 25 August 2013 at 16:58:12 UTC, Joseph Rushton Wakeling wrote:
 Nice! :-)

Thanks ;)
 I think that as a general approach it's nice, but we should 
 preferably make generic the method of wrapping a payload.

To be honest, I think that, for now, this is an "implementation detail". All our PRNGs are created with an explicit "value type", which can easily be made the "Payload", no matter what we do.
 I think this is a matter of taste, but I don't think it's bad 
 to force the user to seed things.  Note that the existing RNGs 
 go to the extent of actually checking inside popFront() etc. if 
 the RNG has been initialized, and if not, seeding it with a 
 default value.

 The only thing I'll remark is that these default-seed ideas are 
 quite prevalent in other RNG implementations in other 
 languages.  Some of the existing unittests may even be based on 
 that default seeding.  So, there may be some expectation of 
 that option being available.

An "option" I had dabbled in was making the PRNG proper a "Container", which you can *slice* to obtain a high performance range. EG:

//----
auto prng = Random();         //Creates a PRNG.
auto first = prng.front;      //Lazily seeds
prng.popFront();              //Checks seeded
auto second = prng.front;     //Checks seeded again...
auto fastPRNG = prng[];       //Lazily seeds, then returns a type *guaranteed* seeded.
auto third = fastPRNG.front;  //No check of isSeeded.
//----

The initial reason I played around with this is that it was an option to "solve" the reference problem: Make the PRNGs themselves value types, and iterate using reference type *slices*. The end result (for "std.random1") has a couple of *major* drawbacks though, IMO:

1. Opt-*in*: Only those aware will use it. The rest of us mortals will still get hit by the bugs.
2. Easily escapes references to locals.
3. As always, if we introduce std.random2, it'll be that much more to clean up.

Still, I thought I'd share my ideas, even if they aren't the best solutions. It *is* an option for the (don't)auto-seed problem.

One more thing. One of the ways I see it (specifically for LF): We can add auto-seed later if we decide it was a bad idea to not have it. The opposite isn't possible.
 I think the two of us agree on the general principle that RNGs 
 should be re-done as structs that wrap a reference to a 
 payload.  Performance- and design-wise, I'd say that the 
 following things are worth considering:

   (1) Should the memory management be done via GC (new) as in 
 your
       implementation, or manually managed?  Bear in mind the 
 general
       desirable principle that Phobos code shouldn't rely on 
 the GC if it
       doesn't have to.

       I don't know what impact the use of the GC here can be 
 expected to have
       on overall performance of a piece of code, and whether it 
 might rule out
       use of this design in highly performance-conscious 
 use-cases such as
       games, etc.

       My own use of RefCounted to ensure manual memory 
 management (no GC)
       seems to have some scope-related issues, see:

 http://forum.dlang.org/post/mailman.445.1377370120.1719.digitalmars-d puremagic.com

       ... and I assume that if payload was allocated manually 
 via malloc
       instead of new, then the same might be true.

Another option I had thought of was to make the value-type Payloads public, with *massive* "do not use" signs. *We* then provide simple and generic (and pure/nothrow) GC-based wrappers.

From there, if, for whatever reason, a user needs malloc allocation, or heck, static allocation, he still has "underhand" access to the payload implementation, but lifecycle management remains *his* responsibility.

I think: Simple and safe by default, with the possibility for extension.
   (2) Should we provide a generic payload wrapper, or require 
 RNG creators
       to implement everything manually?  I strongly support a 
 generic wrapper,
       as there is too great a risk of implementation error if 
 we require
       the wrapping-of-a-reference to be done manually each time.

   (3) As we discussed off-list, if we do implement a generic 
 wrapper, how
       should it work?  Should it work as per my implementation,

           alias RandomGenerator!MyEngine MyRNG;

       or instead as you suggested, as a template mixin,

           struct MyRNG
           {
               mixin RandomGenerator!MyEngine;
           }

       I can see the case for the latter as it will result in 
 much more readable
       type names in error messages.  However, I think it has 
 the potential to
       obscure implementation details in a way that isn't 
 helpful.  Consider what
       happens if we do:

           alias RandomGenerator!MtEngine19937 Mt19937_64
           // ... WRONG!! we forgot to tweak the engine when we 
 copy-pasted,
           // but at least we'll see in any error messages what 
 type of internal
           // engine is being used

       ... compared to:

           struct Mt19937
           {
               mixin RandomGenerator!MtEngine19937;
           }
           // ... Copy-paste failure again, but this time it's 
 obscured and we'll
           // never find out unless we look at the actual source 
 code.

Again, for now, I think this is an implementation detail. That said, I don't buy much into the typo argument. In particular, this is something that gets copy-pasted only once, so we should be relatively safe.

One of the arguments in favor of "internal" mixins is that of extension: For example, laggedFib has a very real reason to implement popFrontN, as it can potentially pop thousands of elements at once in O(1). An "internal" mixin makes it easy to extend a PRNG's capabilities past the "lowest common denominator". With "alias Random = RandomGenerator!RandomPayload", you are really limited to whatever RandomGenerator wrapped.
   (4) The devil's advocate position -- should we take the 
 simple route to
       reference-type RNGs by making them final classes?  It's 
 trivial to do but
       to me it feels "un-Phobos-ish" and will also have the 
 problem of requiring
       a lot more code rewrites on the part of std.random users 
 who want to
       upgrade to std.random2.

 That seems like a good opening summary of issues -- destroy. :-)

 Best wishes,

     -- Joe

Honestly, it might just be the simplest thing to do. For one, it would elegantly solve the "must be seeded" issue (allocation is initialization). It *guarantees* reference semantics. Finally, the (supposed) overhead should be nonexistent compared to the complexity of a PRNG.

The option of allowing "public Payload" definitions could still leave an open door for those that need PRNGs, but don't use the GC (think vidjagames).
Aug 26 2013
Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 26/08/13 17:26, monarch_dodra wrote:
 One more thing. One of the ways I see it (specifically for LF): We can add
 auto-seed later if we decide it was a bad idea to not have it. The opposite
 isn't possible.

Yes. I feel similarly about the question of whether or not pseudo-random RNGs should have .save methods. I can see use-cases, but better to avoid them at first (while, as you have suggested, implementing .dup) and then, if it turns out that was a bad idea, add them.
 Another option I had thought of, was to make the value-types Payloads as
public,
 with *massive* "do not use signs". *We* then provide simple and generic (and
 pure/nothrow) GC-based wrappers.

Related to this, I was thinking about how my RandomGenerator implements the isUniformRandom property (it aliases to the corresponding property of the underlying random engine/payload). Now, suppose that the payloads _don't_ define isUniformRandom but define, say, isRandomPayload or isRandomEngine, and then in the RandomGenerator wrapper you have:

    alias isUniformRandom = Payload.isRandomPayload;

... then the isUniformRNG check of other functions WON'T WORK with the raw Payloads. So you have a static safety check in place that will plain stop users doing stupid stuff.

That is beneficial, because it means that a user can, for example, make use of the Payload definition to define a new RNG of existing type but with a different configuration (e.g. Lagged Fib with different parameter values).
  From there, if, for whatever reason, a user needs malloc allocation, or heck,
 static allocation, he still has an "underhand" access to the payload
 implementation, but lifecycle management remains *his* responsibility.

 I think: Simple and safe by default, with the possibility for extension.

Yes, if we handle isUniformRandom like this.
 Again, for now, I think this is implementation detail. That said, I don't buy
 much into the typo argument. In particular, this is something that gets copy
 pasted only once, so we should be relatively safe.

 One of the arguments in favor of "internal" mixins is that of extension:
 For example, laggedFib has a very real reason to implement popFrontN, as
 it can potentially pop thousands of elements at once in O(1). "Internal"
 mixin makes it easy to extend a PRNG's capabilities past that of the
 "lowest common denominator". With an "alias Random =
 RandomGenerator!RandomPayload", you are really limited to whatever
 RandomGenerator wrapped.

Fair enough, but I think there are other ways of handling that, e.g. via opDispatch. We ought to be able to make sure that _all_ public methods and properties of the payload get forwarded. And -- devil's advocate -- how many different extensions are there likely to be in practice? Is it not the case that we can anticipate them and make sure they are in the wrapper? How many operations can one reasonably expect from a RNG?
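For illustration, the opDispatch forwarding could look something like this (a sketch, with all names assumed):

```d
struct RandomGenerator(Payload)
{
    private Payload* payload;

    // Anything the wrapper doesn't define itself -- e.g. a Lagged
    // Fibonacci payload's O(1) popFrontN -- falls through to the payload.
    auto opDispatch(string name, Args...)(Args args)
    {
        mixin("return payload." ~ name ~ "(args);");
    }
}
```

Since opDispatch is only instantiated on demand, the wrapper doesn't need to anticipate each extension; it just needs the payload to actually define it.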
 Honestly, it might just be the simplest thing to do. For one, it would
 elegantly solve the "must be seeded" issue (allocation is
 initialization). It *guarantees* reference semantics. Finally, the
 (supposed) overhead should be nonexistent compared to the complexity of
 a PRNG.

 The option of allowing "public Payload" definitions could still leave an open
 door for those that need PRNG's, but don't use the GC (think vidjagames).

I would quite like to have Andrei and Walter's input on the struct vs. class aspect, and on Phobos design style.

One thought about "public payload" and wrappers: it ought to be possible to design things so that a RandomGenerator wrapper can be initialized by passing a pointer-to-payload instead of a seed, which obviates the need for internal allocation. And we can use that approach for rndGen() so that the default RNG instance doesn't require GC.
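Roughly, the two construction paths could look like this (hypothetical names throughout):

```d
struct RandomGenerator(Payload)
{
    private Payload* payload;

    this(uint seedValue)    // convenient path: GC-allocates the payload
    {
        payload = new Payload;
        payload.seed(seedValue);
    }

    this(Payload* p)        // GC-free path: the caller owns the storage
    {
        payload = p;
    }
}

// e.g. for rndGen, a static payload and no GC allocation at all:
// static XorshiftPayload storage;
// auto gen = RandomGenerator!XorshiftPayload(&storage);
```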
Aug 26 2013
prev sibling next sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 26/08/13 17:26, monarch_dodra wrote:
 One more thing. One of the ways I see it (specifically for LF): We can add
 auto-seed later if we decide it was a bad idea to not have it. The opposite
 isn't possible.

One note on seeding. As it stands, some RNGs do have a valid default configuration _without_ seed() being called -- see e.g. Xorshift.
Aug 26 2013
prev sibling next sibling parent "monarch_dodra" <monarchdodra gmail.com> writes:
On Monday, 26 August 2013 at 21:20:25 UTC, Joseph Rushton
Wakeling wrote:
 On 26/08/13 17:26, monarch_dodra wrote:
 One more thing. One of the ways I see it (specifically for 
 LF): We can add
 auto-seed later if we decide it was a bad idea to not have it. 
 The opposite
 isn't possible.

One note on seeding. As it stands, some RNGs do have a valid default configuration _without_ seed() being called -- see e.g. Xorshift.

What bothers me about these is that they have a fixed seed value, which (IMO) is even worse than default-seeding to "unpredictableSeed".

C's "rand" also "default seeds". I've "tutored" C++ on codeguru for years, and year in and year out, I'd get asked "why does my random program keep creating the same output". This behavior surprises people.

A PRNG should either error-out on no-seed, or be run-time unpredictably seeded. The middle ground just gives you the worst of both...
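The trap in miniature (assuming Xorshift's fixed default state, as discussed above):

```d
import std.random, std.stdio;

void main()
{
    auto rng = Xorshift();        // compiles, runs... and is deterministic:
    writeln(rng.front);           // identical output on every program run

    rng.seed(unpredictableSeed);  // the step newcomers invariably forget
    writeln(rng.front);
}
```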
Aug 26 2013
prev sibling next sibling parent "monarch_dodra" <monarchdodra gmail.com> writes:
On Monday, 26 August 2013 at 09:47:50 UTC, monarch_dodra wrote:
 FYI, I tweaked my algo a bit, as the benches revealed an issue. 
 Please wait for a new commit before testing my code.

I tweaked the algo a bit. It now runs faster when the amount of bits is the same as size_t. I also renewed my benches, and did a linux dmd and gdc run.

The good news is the windows ulong|64 bit version is faster than before. The *great* news is that on linux, integral generation is *fast*. It's the fastest PRNG of all, faster than XOS and LC. *AND* it's generating 64 bit ulongs (!). In terms of performance, one could say it *creams* the competition :D

The BAD news though, is that it would appear the floating point generation on linux is *bad*. As in, so bad, it's faster to just generate integrals and divide. Arguably, I think the "native" double generation might be of slightly higher quality, but I don't think it is worth the cost.

Here are the new benches!!!

//=============================================
DMD32 2.063.2 on W764

Integrals
Time for LinearCongruentialEngine!(uint, 16807, 0, 214...): 8168
Time for MersenneTwisterEngine!(uint, 32, 624, 397, 31...): 14185
Time for LaggedFibonacciEngine!(uint, 32u, 607, 273)      : 7079
Time for LaggedFibonacciEngine!(ulong, 48u, 607, 273)     : 11869
Time for LaggedFibonacciEngine!(ulong, 64u, 607, 273)     : 9677

Floating
Time for LinearCongruentialEngine!(uint, 16807, 0, 214...): 12151
Time for MersenneTwisterEngine!(uint, 32, 624, 397, 31...): 19497
Time for XorshiftEngine!(uint, 32, 13, 17, 5)             : 10476
Time for LaggedFibonacciEngine!(uint, 32u, 607, 273)      : 11117
Time for LaggedFibonacciEngine!(ulong, 48u, 607, 273)     : 17415
Time for LaggedFibonacciEngine!(ulong, 64u, 607, 273)     : 15209
Time for LaggedFibonacciEngine!(double, 32u, 607, 273)    : 14882
Time for LaggedFibonacciEngine!(double, 48u, 607, 273)    : 14874
Time for LaggedFibonacciEngine!(real, 64u, 607, 273)      : 19138

//=============================================
Kubuntu 13.04-64 DMD64 HEAD

Integrals
Time for LinearCongruentialEngine!(uint, 16807, 0, 2147...): 6942
Time for MersenneTwisterEngine!(uint, 32, 624, 397, 31,...): 13352
Time for XorshiftEngine!(uint, 32, 13, 17, 15)             : 6267
Time for LaggedFibonacciEngine!(uint, 32LU, 607, 273)      : 4888
Time for LaggedFibonacciEngine!(ulong, 48LU, 607, 273)     : 10401
Time for LaggedFibonacciEngine!(ulong, 64LU, 607, 273)     : 6401

Floating
Time for LinearCongruentialEngine!(uint, 16807, 0, 2147...): 8200
Time for MersenneTwisterEngine!(uint, 32, 624, 397, 31, ..): 15357
Time for XorshiftEngine!(uint, 32, 13, 17, 15)             : 7851
Time for LaggedFibonacciEngine!(uint, 32LU, 607, 273)      : 5399
Time for LaggedFibonacciEngine!(ulong, 48LU, 607, 273)     : 11030
Time for LaggedFibonacciEngine!(ulong, 64LU, 607, 273)     : 6927
Time for LaggedFibonacciEngine!(double, 32LU, 607, 273)    : 20508
Time for LaggedFibonacciEngine!(double, 48LU, 607, 273)    : 20832
Time for LaggedFibonacciEngine!(real, 64LU, 607, 273)      : 34674

//=============================================
Kubuntu 13.04-64 gdc unknown version

Integrals
Time for LinearCongruentialEngine!(uint,16807,0,2147483...): 8397
Time for MersenneTwisterEngine!(uint,32,624,397,31,-172...): 13976
Time for XorshiftEngine!(uint,32,13,17,5)                  : 6135
Time for LaggedFibonacciEngine!(uint,32,607,273)           : 5165
Time for LaggedFibonacciEngine!(ulong,48,607,273)          : 8582
Time for LaggedFibonacciEngine!(ulong,64,607,273)          : 5250

Floating
Time for LinearCongruentialEngine!(uint,16807,0,2147483...): 8499
Time for MersenneTwisterEngine!(uint,32,624,397,31,-172...): 14523
Time for XorshiftEngine!(uint,32,13,17,5)                  : 6194
Time for LaggedFibonacciEngine!(uint,32,607,273)           : 5523
Time for LaggedFibonacciEngine!(ulong,48,607,273)          : 9239
Time for LaggedFibonacciEngine!(ulong,64,607,273)          : 5498
Time for LaggedFibonacciEngine!(double,32,607,273)         : 16733
Time for LaggedFibonacciEngine!(double,48,607,273)         : 16704
Time for LaggedFibonacciEngine!(real,64,607,273)           : 29949
Aug 27 2013
prev sibling next sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 27/08/13 20:41, monarch_dodra wrote:
 The BAD news though, is that it would appear the floating point generation on
 linux is *bad*. As in, so bad, it's faster to just generate integrals and
 divide. Arguably, I think the "native" double generation might be of slightly
 higher quality, but I don't think it is worth the cost.

"Bad" is a relative term. It might be slower, but you might (I don't know) be getting something that has better statistical quality. So, what you might prefer to use depends on your use-case.
Aug 28 2013
prev sibling next sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 25/08/13 18:58, Joseph Rushton Wakeling wrote:
    (2) Should we provide a generic payload wrapper, or require RNG creators
        to implement everything manually?  I strongly support a generic wrapper,
        as there is too great a risk of implementation error if we require
        the wrapping-of-a-reference to be done manually each time.

A few issues I ran into when trying to create such a wrapper:

* Function properties are not necessarily (easily) consistent between different RNGs. For example, it makes sense that popFront() should be @safe pure nothrow, but in the Linear Congruential generator there are casts which mean that it can't be @safe. Now, we _could_ go with the lowest common denominator, but that's not a very nice state of affairs. Is there any way for a wrapper to effectively inherit the properties of the functions it's wrapping?

* @disable this() doesn't seem appropriate for all RNGs, e.g. Xorshift. In fact, for _all_ RNGs you probably want to have a default initialization. So, in this case, classes rather than structs start to look appealing.
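On the first point, one possible answer, as a sketch: member functions of a *templated* wrapper get their attributes inferred per instantiation, so the wrapper could be exactly as @safe/pure/nothrow as its payload (names hypothetical):

```d
struct RandomGenerator(Payload)
{
    private Payload* payload;

    // no explicit @safe/pure/nothrow here: since RandomGenerator is a
    // template, the compiler infers for each instantiation whatever
    // Payload.popFront itself supports, instead of forcing the lowest
    // common denominator across all RNGs
    void popFront() { payload.popFront(); }
    auto front() { return payload.front; }
}
```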
Aug 28 2013
prev sibling next sibling parent "monarch_dodra" <monarchdodra gmail.com> writes:
On Wednesday, 28 August 2013 at 17:57:13 UTC, Joseph Rushton 
Wakeling wrote:
 On 25/08/13 18:58, Joseph Rushton Wakeling wrote:
   (2) Should we provide a generic payload wrapper, or require 
 RNG creators
       to implement everything manually?  I strongly support a 
 generic wrapper,
       as there is too great a risk of implementation error if 
 we require
       the wrapping-of-a-reference to be done manually each 
 time.

 A few issues I ran into when trying to create such a wrapper:
 
 * Function properties are not necessarily (easily) consistent between 
 different RNGs. For example, it makes sense that popFront() should be 
 @safe pure nothrow, but in the Linear Congruential generator there are 
 casts which mean that it can't be @safe. Now, we _could_ go with the 
 lowest common denominator, but that's not a very nice state of affairs. 
 Is there any way for a wrapper to effectively inherit the properties of 
 the functions it's wrapping?
 
 * @disable this() doesn't seem appropriate for all RNGs, e.g. Xorshift. 
 In fact, for _all_ RNGs you probably want to have a default 
 initialization. So, in this case, classes rather than structs start to 
 look appealing.

I agree that classes over structs are appealing, but I still think that having "semi-public payload implementations" is important. There are a lot of devs out there that don't use the GC, in particular in video games. It would be nice for them to not have to reinvent the wheel. With that said, even with classes, we'd need an easy way to create the classes from the payloads.
Aug 28 2013
prev sibling next sibling parent "Joseph Rushton Wakeling" <joseph.wakeling webdrake.net> writes:
On Wednesday, 28 August 2013 at 18:43:53 UTC, monarch_dodra wrote:
 With that said, even with classes, we'd need an easy way to 
 create the classes from the payloads.

Don't understand. With classes you don't need payloads. Just converting existing RNG structs to final classes is sufficient.
Aug 28 2013
prev sibling next sibling parent "monarch_dodra" <monarchdodra gmail.com> writes:
On Thursday, 29 August 2013 at 00:04:08 UTC, Joseph Rushton 
Wakeling wrote:
 On Wednesday, 28 August 2013 at 18:43:53 UTC, monarch_dodra 
 wrote:
 With that said, even with classes, we'd need an easy way to 
 create the classes from the payloads.

Don't understand. With classes you don't need payloads. Just converting existing RNG structs to final classes is sufficient.

Yes, that'd be the idea, but I mean that if you do that, you close the door to giving straight access to a struct-implementation. Also, placing a value-type payload in a class is pretty trivial.
Aug 28 2013
prev sibling next sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 29/08/13 08:45, monarch_dodra wrote:
 Yes, that'd be the idea, but I mean that if you do that, you close the door to
 giving straight access to a struct-implementation.

Understood, but I'm not sure that's as big an issue as you think. I exchanged some private emails with Manu and with Adam Ruppe, on how RNGs-as-final-classes might play with games or embedded programming, and I think the consensus was that it wasn't a big deal, so long as all the allocation happened up front. Adam suggested that e.g. one could avoid GC by allocating memory and using emplace(), so as long as the class is compatible with that approach, it's fine.
Aug 29 2013
prev sibling next sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 27/08/13 08:16, monarch_dodra wrote:
 What bothers me about these is that they have a fixed seed value,
 which (IMO) is even worst than default seeding to
 "unpredicatableSeed"

 C's "rand" also "defaults seed". I've "tutored" C++ on codeguru
 for years, and years in and years out, I'd get asked "why does my
 random program keep creating the same output". This behavior
 surprises people.

 A PRNG should either error-out on no-seed, or be run-time
 unpredictably seeded. The middle ground just gives you the worst
 of both...

For what it's worth, I think that your experience is a consequence of the particular case of C. The "easy" way to generate a random number in C is via some variation on rand() / RAND_MAX, so that's what people do, and so they get hit by the default seed.

On the other hand, with D the default random number generator is rndGen, which is unpredictably seeded, and which is used if no other RNG is specified -- so _default_ random behaviour in D mimics that in interpreted languages and is different per program run. (Actually, I sometimes find myself seeding rndGen in order to get _the same_ results, which of course sometimes you do want :-)

Personally I think there may be some value in having an accepted default configuration for RNGs, so long as it's correctly signposted what will happen, and so long as the "easy" thing to do is not going to fall into the trap you described.

However, I think initialization is an important issue not just for RNGs but for the diversity of other entities that use them. This also impacts on the class/struct discussion.

For example, with RNGs per se it's pretty trivial (with a class-based approach) to have a constructor requiring a seed, and in addition (possibly) a default constructor that seeds with some default condition (let's leave aside preferences on this for now :-). The latter default-seed approach cannot be implemented with structs -- you can't have a no-parameter constructor -- so struct-based RNGs have to work around this in one of two ways: either by having default settings for all the internal values of the RNG state data (which e.g. Xorshift can do because it has a small total number of parameters), or by having conditionals which get triggered when front(), popFront() etc. are called, as in the Mersenne Twister implementation, which calls seed() if the value of the internal parameter mti equals size_t.max. The latter approach is very annoying because it means that e.g. front() cannot be const, which we'd like it to be.
So, all of that makes final classes a nice approach, albeit we might compromise for other reasons. But it's not so simple for other random functions. Now consider random sampling, and the following 3 versions of code:

    {   // 1
        auto gen = Random(unpredictableSeed);
        auto sample = randomSample(iota(100), 10, gen);
        writeln(sample);
        writeln(uniform(0.0, 1.0, gen));
    }
    {   // 2
        auto gen = Random(unpredictableSeed);
        auto sample = randomSample(iota(100), 10, gen);
        writeln(uniform(0.0, 1.0, gen));
        writeln(sample);
    }
    {   // 3
        auto gen = Random(unpredictableSeed);
        writeln(uniform(0.0, 1.0, gen));
        auto sample = randomSample(iota(100), 10, gen);
        writeln(sample);
    }

What would you _expect_ to happen to the values of the random number and the random sample in each case? The most intuitive thing would be that the values of the sample are determined at the point it's created, so they would depend on where the line sample = ... occurs and on nothing else. However, we know that random samples are lazily evaluated. So, perhaps the second logical alternative is that its values should be determined _when it is read_, i.e. at the point where we writeln(sample).

Now consider -- when is the _first_ "front" value of the sample set? If it's set at construction time (which would be normal for a range), the remaining values in the sample will depend on what calls to the RNG are put in in-between construction and reading. So, in this case, the samples from programs 1 and 2 will have the same _first_ value, but different subsequent values. That, to me, is both unintuitive and undesirable.

Now it gets more complicated. In the case of RandomSample, we have a bunch of different public functions we can call. When front() is called, we need to check if the sample has been initialized before we return a value. Similarly, popFront() should behave differently depending on whether the first sample value has been determined yet.
Then there is a method index() that returns the numerical index of the sampled value (so, if you sample from 0, 1, 2, ... you should have sample.front == sample.index always), and this too clearly depends on the sample being initialized before it can return.

None of this can be solved with a struct vs. class difference, assuming we agree that it's inherently undesirable to initialize at construction time. This problem is potentially a rich source of bugs, as we saw with RandomCover in our recent pull request discussion. Relying on manual solutions seems undesirable -- you will get people (as I did) fixing front() and popFront(), but forgetting about a function like RandomSample.index(). (Fix in the works :-)

So, I'm wondering if there is potential for syntax sugar like invariant(), but which will not get kicked out when compiled with -release, and which will enable essential checking of the internal state of the system whenever a public method is called. I'd suggest there actually be two: one to check the entry condition, one the exit.
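The manual version of that entry check might be sketched like this (hypothetical names and skeleton only, not the actual Phobos code); the point is that every public member funnels through one guard, so index() can't be forgotten the way piecemeal front()/popFront() fixes allow:

```d
struct RandomSample(Range)
{
    private Range  _input;
    private size_t _index;
    private bool   _initialized = false;

    // the single choke point every public member must call
    private void ensureInitialized()
    {
        if (!_initialized)
        {
            // ... select the first sample element here, consuming the RNG ...
            _initialized = true;
        }
    }

    auto front()    { ensureInitialized(); return _input.front; }
    void popFront() { ensureInitialized(); /* ... advance the sample ... */ }
    size_t index()  { ensureInitialized(); return _index; }
}
```

Note this also illustrates the cost mentioned earlier in the thread: front() can no longer be const, since it may mutate state on first use.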
Aug 29 2013
prev sibling next sibling parent "monarch_dodra" <monarchdodra gmail.com> writes:
On Thursday, 29 August 2013 at 10:16:17 UTC, Joseph Rushton 
Wakeling wrote:
 On 29/08/13 08:45, monarch_dodra wrote:
 Yes, that'd be the idea, but I mean that if you do that, you 
 close the door to
 giving straight access to a struct-implementation.

Understood, but I'm not sure that's as big an issue as you think. I exchanged some private emails with Manu and with Adam Ruppe, on how RNGs-as-final-classes might play with games or embedded programming, and I think the consensus was that it wasn't a big deal, so long as all the allocation happened up front.

Alright. Nice to know.
 Adam suggested that e.g. one could avoid GC by allocating 
 memory and using emplace(), so as long as the class is 
 compatible with that approach, it's fine.

This is actually much trickier than it looks. Check out "Scoped". AFAIK, it is the only type which does this, and it is not copyable.
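For reference, the emplace() approach under discussion would look roughly like this -- MyRng is an assumed final-class RNG with a seed constructor, and this is only a sketch of the pattern, not a drop-in solution:

```d
import core.stdc.stdlib : free, malloc;
import std.conv : emplace;

void useWithoutGC(uint seedValue)
{
    enum size = __traits(classInstanceSize, MyRng);
    void* mem = malloc(size);
    scope(exit) free(mem);

    // construct the class instance in the malloc'd buffer, bypassing the GC
    auto rng = emplace!MyRng(mem[0 .. size], seedValue);
    // ... use rng; it must not escape this scope, since the memory is freed ...
}
```

As with Scoped, the resulting handle must not be copied out of the owning scope, which is exactly the trickiness being pointed out here.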
Aug 29 2013
prev sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 29/08/13 12:43, monarch_dodra wrote:
 This is actually much trickier than it looks. Check out "Scoped".

 AFAIK, it is the only type which does this, and it is not copyable.

Ack. Not fun. :-( I also note that due to this issue, classes don't play nice with the static function members necessary for e.g. rndGen: http://forum.dlang.org/thread/ojuvtluqkqazaejuqlff forum.dlang.org ... although that might be fixed soon.
Aug 29 2013